Board: Mining (Altcoins)
Topic: Re: Number 9! Ninth altcoin thread. Back to the moon Baby!
by DrG on 29/10/2020, 09:09:59 UTC
If it were entirely VRAM based, then an ASIC would simply need HBM and a cheap FPGA or CPU to mine beyond what a video card is capable of. The cores have to actually calculate the solutions, and the RAM is needed to index that table. Clearly the core count is significantly higher, because they were able to increase efficiency even on the same node, so more of that 16 GB of memory will be utilized. You could pair an RX 460 with HBM memory and map a wide bus to the memory, and it would still crawl. You need all parts running for ETH (even though everybody always parrots that the core is irrelevant).
ETHash requires 8192 bytes to be read from VRAM per hash, so with 512 GB/s of bandwidth it's impossible to get more than 67.1 MH/s, no matter what the GPU cores can do. Infinity Cache can't help when 4 GB of memory is accessed randomly. FPGAs with HBM exist; they have similar bandwidth and hashrates at much lower power consumption (51 MH/s @ 70 watts).
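For reference, that ceiling follows directly from dividing bandwidth by per-hash traffic. A minimal sketch of the napkin math (the 8192-byte figure is from the post above; reading the 512 GB/s as binary GiB/s is what reproduces 67.1, and the helper name is just for illustration):

Code:
# Bandwidth-limited Ethash ceiling: each hash reads 8192 bytes of DAG,
# so hashrate can never exceed bandwidth / 8192 regardless of core power.
DAG_BYTES_PER_HASH = 8192  # per-hash DAG traffic, figure quoted above

def max_hashrate_mhs(bandwidth_gib_s: float) -> float:
    """Theoretical maximum in MH/s for a given bandwidth in GiB/s."""
    return bandwidth_gib_s * 2**30 / DAG_BYTES_PER_HASH / 1e6

print(max_hashrate_mhs(512))  # 67.108864 -> the ~67.1 MH/s cited above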

I'm on a phone right now, so I'll take your word for the napkin math. What was the theoretical maximum for Radeon VII cards going strictly by memory bandwidth? If it really is that close to the bandwidth ceiling, then perhaps AMD did not want the cards going to miners. Maybe that was more of a reason for going with GDDR6 instead of HBM2 than cost factors were. Yields for GDDR6 and GDDR6X have been pretty good from Samsung, as it's pretty mature at this point (at least the 16Gb density).
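For the Radeon VII question: going by its published 4096-bit HBM2 bus at 1 TB/s (a spec-sheet figure, not from this thread), the same formula gives the ceiling. A quick sketch, showing both the decimal and the binary reading of that bandwidth, since the 67.1 number above implies the binary one:

Code:
# Radeon VII ceiling under the same bandwidth-only model.
DAG_BYTES_PER_HASH = 8192           # per-hash DAG traffic, as above
bw_decimal = 1024e9                 # 1 TB/s = 1024 GB/s, decimal bytes
bw_binary = 1024 * 2**30            # same number read as GiB/s
print(bw_decimal / DAG_BYTES_PER_HASH / 1e6)  # ~125.0 MH/s
print(bw_binary / DAG_BYTES_PER_HASH / 1e6)   # ~134.2 MH/s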