Re: SRBMiner Cryptonight AMD GPU Miner V1.4.6
Board: Mining (Altcoins)
by cryptosize on 23/04/2018, 16:21:56 UTC
1.) Nope, and I don't think he's going to, but you never know later on.

As far as Claymore 9.7 goes... probably some shady coding he doesn't want anyone to see, or he just wants to keep it closed to protect his fees. You never know.

2.) I did notice that on the 14.4 drivers my VRAM was reported at around 1GB, while on 15.7.1 all the other miners detect 2047MB for some reason. Claymore found 2047MB on both drivers, though, so figuring out how it gets m1 and m2 should be an easy fix. I know for a fact I only have 512MB dedicated, which is also what m2 shows. I'll have a look at how xmrig/xmr-stak polls the memory; IIRC this has been an issue with those for quite a while. I've never been able to get any miner to work without dying, except for Claymore on 15.7.1, and believe me, I'm trying to find a workaround that doesn't involve switching and mangling drivers around to make something work.
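For what it's worth, OpenCL miners like xmrig/xmr-stak generally size their workload from what the driver reports via clGetDeviceInfo (CL_DEVICE_GLOBAL_MEM_SIZE / CL_DEVICE_MAX_MEM_ALLOC_SIZE), so a driver that under-reports VRAM directly shrinks how many Cryptonight scratchpads (2 MiB each) the miner thinks it can allocate. A minimal sketch of that sizing logic, with illustrative names and an assumed reserve value (not the miners' actual internals):

```python
# Sketch: derive a Cryptonight thread count from driver-reported VRAM.
# SCRATCHPAD is the classic Cryptonight 2 MiB per parallel hash;
# RESERVED is an assumed headroom the driver/OS keeps back.

SCRATCHPAD = 2 * 1024 * 1024      # 2 MiB scratchpad per hashing thread
RESERVED = 128 * 1024 * 1024      # assumed driver/OS headroom

def max_threads(reported_vram_bytes: int) -> int:
    """How many scratchpads fit in what the driver reports."""
    usable = max(0, reported_vram_bytes - RESERVED)
    return usable // SCRATCHPAD

# A driver reporting only 384MB leaves room for far fewer threads
# than one reporting the full ~1GB:
print(max_threads(384 * 1024 * 1024))   # 128 threads
print(max_threads(1024 * 1024 * 1024))  # 448 threads
```

That's why the same card can look "too small" to one miner and fine to another: the miner trusts whatever number the driver hands it.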

How much memory does 14.4 show you compared to 15.7.1?
1) The thing is, Claymore recently removed the devfee from his miner (v11.3). Apparently he's leaving the scene, which means no more support.

It sucks that he doesn't want to release the source code...

2) 14.4 detects the entire memory, while 15.7.1 says only 384MB of VRAM is available.

I really want to know how Claymore v9.7 detects the entire memory with the 15.7.1 drivers... yet another secret hidden in the closed-source binary.

Yeah, I did notice he took out the fees, at least for cards with 3GB or less, I think.

But how much memory do you have exactly in total when it's reported correctly? Then I might finally be able to dig into it and see if I can get something going.
Mine's actually doing the opposite for some reason: on 14.4 it shows 1024MB IIRC, and 2047MB on 15.7.1... not sure why it isn't 2048, though.
766MB IIRC.

I have a discrete GPU, so I'm not sure about your APU.

Also, Claymore v9.7 is the only miner that maxes out my GPU utilization/D3D usage (99-100%) in HWiNFO64, without spilling data to the main/CPU RAM (which drops hashrate a lot because of PCI-e bottlenecks).
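The hashrate drop from spilling into system RAM makes sense as a pure bandwidth problem: classic Cryptonight's main loop makes roughly 524,288 passes over its 2 MiB scratchpad, touching about 64 bytes per pass (~32 MiB of traffic per hash), so whichever bus carries that traffic caps the hashrate. A back-of-envelope sketch, with rough assumed bandwidth figures rather than measurements:

```python
# Back-of-envelope: why Cryptonight scratchpad traffic over PCIe is so
# much slower than local VRAM. All figures are rough assumptions.

# ~524,288 main-loop iterations x ~64 bytes read+written each = ~32 MiB/hash
TRAFFIC_PER_HASH = 524288 * 64

def bandwidth_limited_hashrate(bandwidth_bytes_per_s: float) -> float:
    """Upper bound on H/s if memory bandwidth is the only limit."""
    return bandwidth_bytes_per_s / TRAFFIC_PER_HASH

pcie3_x16 = 16e9   # ~16 GB/s practical PCIe 3.0 x16 (assumed)
gddr5 = 200e9      # ~200 GB/s typical mid-range GDDR5 card (assumed)

print(round(bandwidth_limited_hashrate(pcie3_x16)))  # ~477 H/s ceiling
print(round(bandwidth_limited_hashrate(gddr5)))      # ~5960 H/s ceiling
```

So even before latency enters the picture, a scratchpad living on the far side of the PCIe bus is bandwidth-starved by an order of magnitude, which matches the big hashrate drop you see when data spills to CPU RAM.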

Do you know if it's possible to program VLIW GPUs in pure assembly? IIRC, GCN GPUs support shader intrinsics, which is kind of like console-style, to-the-metal programming:

https://www.gamersnexus.net/guides/2646-shader-intrinsic-functions-bypass-abstraction-layers