Re: SILENTARMY v5: Zcash miner, 115 sol/s on R9 Nano, 70 sol/s on GTX 1070
Board: Mining (Altcoins)
by xeridea on 18/11/2016, 16:33:41 UTC


Scrypt GPU mining ended in the fall of '14 without private kernels. x11 started up shortly thereafter and became unprofitable at the beginning of winter. Gridseeds weren't ASICs either; the first ones weren't very profitable or good. You may have just remembered those little USB things coming out and thought 'well, those were ASICs' - they weren't. There were a lot of really bad ASICs. Gridseeds were never a good deal.

Unless you were running private kernels yourself, it wasn't happening.

What other algo are you looking at that's mature? Dagger doesn't count. That's a very niche scenario and it's bound almost exclusively by bus width. The GPUs never get a chance to even come close to being fully utilized.

The R9-290 has a 512-bit bus, as was already mentioned.

Who tests GPUs on SHA-256? How about trying something remotely relevant to the discussion, like say NeoS, Lyra2v2, or even x11? People haven't made optimized miners for SHA in years. As mentioned before, if you're talking about 'theoretical usage' scenarios, video games are a very good example of that, as GPUs are made to run as fast as possible on them.

Memory usage doesn't need to be about bandwidth or bus width; it could just be the total memory footprint as well. Not just that, it doesn't need to be restricted JUST to throughput; an algo can hit memory and still do a lot of processing on GPUs. At this point though you're just making shit up and theorycrafting again.

You can blame latency all you want, but the Fury not only has a 4096-bit bus, it also has gobs of memory bandwidth, and it's not eight times faster than the R9-290 or even twice as fast. It's not just all about memory speeds here, or even latency.

 The Gridseed 3355 WAS in fact an ASIC - and on scrypt it was more efficient than anything GPU-based at the time by quite a bit. A single side of an "80 blade" would pull 2.5 Mhash/sec at 40 watts, where the best GPUs of the time were pulling less than half that at a LOT more power (the 7990 was an exception with its pair of cores; it could actually manage a bit more than half the hashrate, but pulled a TON more power to do so).

 Dagger (ETH) isn't "bus width limited", it's memory access limited - NOT the same thing, or the RX 480 wouldn't even be close to matching the R9 290 on hashrate.

 For MOST usage, the Fury is a LOT faster than the R9 290 - but on ETH it's barely in the same ballpark despite the much higher "in theory" memory bandwidth. *SOMETHING* certainly keeps it uncompetitive with much older cards with lower rated memory bandwidth.



The Fury cards have HBM, which has a lot higher memory bandwidth, but also higher latency. Eth is sensitive to latency. This is also why the 1080 sucks at Eth: GDDR5X doesn't have much more bandwidth, but has higher latency. Tightening memory timings gives you a speed boost on Eth or Zcash. Games aren't really affected by latency as much, just raw bandwidth. HBM2 will be better, though Vega 10 may or may not have that much more bandwidth. There is a new way of accessing HBM2 that reduces latency somewhat, though, if the application is coded for it, so we will see how things go.
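
To make the latency point concrete, here's a rough CUDA sketch (not actual miner code - the kernel names, buffer size, and launch dimensions are just illustrative) contrasting the two access patterns: a coalesced streaming read, which is limited by raw bandwidth like most game workloads, and a chain of dependent pseudo-random reads, which is limited by memory latency roughly the way Ethash's DAG lookups are, since each load's address depends on the previous load's value.

Code:
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Bandwidth-bound pattern: adjacent threads read adjacent words, so every
// warp's loads coalesce and the card streams near its rated bandwidth.
__global__ void stream_sum(const uint32_t *buf, uint32_t n, uint32_t *out)
{
    uint32_t acc = 0;
    for (uint32_t i = blockIdx.x * blockDim.x + threadIdx.x; i < n;
         i += gridDim.x * blockDim.x)
        acc += buf[i];
    atomicAdd(out, acc);
}

// Latency-bound pattern: each load's address depends on the previous load's
// value, so DRAM round-trip time, not peak bandwidth, sets the pace.
// Ethash behaves similarly with its chain of dependent DAG reads per hash.
__global__ void pointer_chase(const uint32_t *buf, uint32_t n, int hops,
                              uint32_t *result)
{
    uint32_t tid = blockIdx.x * blockDim.x + threadIdx.x;
    uint32_t idx = tid % n;
    for (int h = 0; h < hops; ++h)
        idx = buf[idx];               // serial dependency between loads
    result[tid] = idx;                // keeps the compiler from dropping the loop
}

int main()
{
    const uint32_t n = 1u << 24;      // 64 MiB of 32-bit indices (illustrative size)
    const int threads = 256, blocks = 256;

    uint32_t *buf, *out;
    cudaMalloc(&buf, n * sizeof(uint32_t));
    cudaMalloc(&out, blocks * threads * sizeof(uint32_t));
    cudaMemset(out, 0, blocks * threads * sizeof(uint32_t));

    // Fill the buffer with a scrambled index mapping so the chase kernel
    // hops all over the card's memory instead of walking it linearly.
    uint32_t *h = (uint32_t *)malloc(n * sizeof(uint32_t));
    for (uint32_t i = 0; i < n; ++i)
        h[i] = (i * 2654435761u + 12345u) % n;
    cudaMemcpy(buf, h, n * sizeof(uint32_t), cudaMemcpyHostToDevice);
    free(h);

    stream_sum<<<blocks, threads>>>(buf, n, out);        // bandwidth-bound
    pointer_chase<<<blocks, threads>>>(buf, n, 64, out); // latency-bound
    cudaDeviceSynchronize();
    printf("kernels finished: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(buf);
    cudaFree(out);
    return 0;
}

If you time the two launches (nvprof or cudaEvents), the streaming kernel should scale with whatever bandwidth the card has, while the chase kernel should barely move when you add bandwidth but should speed up when latency (e.g. memory timings) improves - which is the same split you see between games and Eth.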