Board: Mining (Altcoins)
Re: Ethereum 30% hashrate drop for RX400/RX500 incoming soon?
by Vann on 23/06/2017, 11:17:53 UTC

I think it has to do with the memory clocks on the cards. AMD RX 580 memory clocks top out at ~2100 MHz, while Nvidia memory clocks are reported at more than 5000 MHz.

So ETH depends more on the core clock, which is close between the cards, while ZEC uses both (core + memory), and there Nvidia has the advantage because of its higher memory clock.
I wonder if some miners could be optimized for the core so that AMD cards could raise their Sol/s.

Actually, ETH runs better at a lower core clock speed than ZEC does. It must also be something about the difference in memory architecture. My MSI 1080 Gaming X will do ~560 Sol/s on ZEC, but only ~25 MH/s on ETH. The difference is that the 1080s use GDDR5X, while the 1070s use GDDR5 like the RX 580. The 1070s also run at a higher core clock than the RX 580s.
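For context: Ethash is bandwidth-bound enough that you can roughly estimate a card's ceiling from memory bandwidth alone, since each hash does 64 random reads of a 128-byte DAG page, i.e. 8192 bytes per hash. A back-of-the-envelope sketch in Python, using the cards' stock bandwidth specs (real hashrates land somewhat below these ceilings):

Code:
# Rough Ethash hashrate ceiling from memory bandwidth alone.
# Each Ethash hash performs 64 random reads of a 128-byte DAG page,
# so one hash consumes 64 * 128 = 8192 bytes of bandwidth.
BYTES_PER_HASH = 64 * 128

# Stock memory bandwidth in GB/s (manufacturer specs).
cards = {
    "RX 580 (GDDR5, 256-bit)":    256,
    "GTX 1070 (GDDR5, 256-bit)":  256,
    "GTX 1080 (GDDR5X, 256-bit)": 320,
}

for name, gb_s in cards.items():
    ceiling_mh_s = gb_s * 1e9 / BYTES_PER_HASH / 1e6
    print(f"{name}: ~{ceiling_mh_s:.0f} MH/s ceiling")
# RX 580 / GTX 1070: ~31 MH/s; GTX 1080: ~39 MH/s

On paper the 1080 should lead, yet it only gets ~25 MH/s; the commonly cited culprit is GDDR5X's larger minimum access granularity, which wastes bandwidth on Ethash's small random reads (more on that below).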

ZEC is core-count and core-speed dependent, so Nvidia does better at it. ETH is memory-bandwidth and memory-speed dependent, so AMD's lower core count/speed does not hurt it much. The memory numbers you see reported as 7000 or 8000 MHz are just 4x the actual memory clocks of 1750-2000 MHz. Both AMD and NVIDIA use essentially the same GDDR5X memory - similar chips, similar clocks.
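The 4x relationship is easy to check: GDDR5 transfers data on both edges of a write clock (WCK) that itself runs at twice the command clock, so the "effective" figure is four times the clock that tools like GPU-Z report. A quick sketch of the arithmetic (the clock and bus width are the RX 580's stock specs):

Code:
# GDDR5 "effective" clock = 4 x the actual command clock:
# data moves on both edges (x2) of a write clock (WCK) that
# runs at twice the command clock (x2 again).
def gddr5_effective_mhz(actual_mhz: float) -> float:
    return actual_mhz * 4

def bandwidth_gb_s(actual_mhz: float, bus_bits: int) -> float:
    # bytes/s = effective transfers/s * bus width in bytes
    return gddr5_effective_mhz(actual_mhz) * 1e6 * (bus_bits // 8) / 1e9

# RX 580: 2000 MHz actual command clock on a 256-bit bus
print(gddr5_effective_mhz(2000))   # 8000.0 -> the advertised "8000 MHz"
print(bandwidth_gb_s(2000, 256))   # 256.0 GB/s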

The RX 4XX/5XX use GDDR5, the same as the Nvidia 1060/1070s. The Nvidia 1080/1080 Tis use GDDR5X memory, which is different from GDDR5.

Quote
In January 2016, JEDEC standardized GDDR5X SGRAM.[2] GDDR5X targets a transfer rate of 10 to 14 Gbit/s per pin, twice that of GDDR5.[3] Essentially, it provides the memory controller the option to use either a double data rate mode that has a prefetch of 8n, or a quad data rate mode that has a prefetch of 16n.[4] GDDR5 only has a double data rate mode that has an 8n prefetch.[5] GDDR5X also uses 190 pins per chip (190 BGA).[4] By comparison, standard GDDR5 has 170 pins per chip (170 BGA).[5] It therefore requires a modified PCB.

https://en.wikipedia.org/wiki/GDDR5_SDRAM
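To make the quoted prefetch difference concrete: per-pin data rate equals the internal memory-array clock times the prefetch depth, so GDDR5X's 16n prefetch moves twice the bits per array cycle that GDDR5's 8n does. A small sketch under that standard relationship; it also shows how GDDR5X's minimum access per chip grows to 64 bytes, which is the usual explanation for the 1080's weak Ethash numbers despite its higher raw bandwidth:

Code:
# Per-pin data rate = internal array clock * prefetch bits per pin.
def per_pin_gbit_s(array_clock_mhz: float, prefetch_bits: int) -> float:
    return array_clock_mhz * 1e6 * prefetch_bits / 1e9

# GDDR5 at 8 Gbit/s per pin: 1000 MHz array clock, 8n prefetch.
print(per_pin_gbit_s(1000, 8))    # 8.0 Gbit/s
# GDDR5X at 10 Gbit/s per pin: only 625 MHz array clock, 16n prefetch.
print(per_pin_gbit_s(625, 16))    # 10.0 Gbit/s

# Minimum access per 32-pin chip (prefetch bits * pins / 8 bits per byte):
print(8  * 32 // 8)   # 32 bytes for GDDR5
print(16 * 32 // 8)   # 64 bytes for GDDR5X in QDR mode

Fetching 64 bytes per chip whether you need them or not is fine for sequential workloads like gaming, but Ethash's scattered 128-byte reads leave much of each fetch unused, which would square with the 1080 landing under the GDDR5 cards on ETH while beating them on ZEC.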