Board: Mining (Altcoins)
Topic: Re: DIY FPGA Mining rig for any algorithm with fast ROI
Posted by GPUHoarder on 12/05/2018, 18:51:55 UTC

Do you really think that they have in that box a single chip dissipating 1kW with 2 USB, 1 Ethernet and 1 HDMI port?
 

To be fair, I've viewed that website in the past and I did not see their ASIC chip specifications. At least to me, their specifications were for the unit as a whole.


What’s wrong with “900W to 1kW per hour” exactly? Other than being pedantic, I think that saying consumption is 0.9-1 kWh per hour of operation is understood.

HBM/HBC doesn’t really change the bandwidth game to that level. It is possible to build, as I said, in either configuration. 1 TB/s HBM may be coming in the next generation of chips/interposers, and one could achieve those numbers with 4-8 physical ASICs + HBM, each running at 125 W. Latency is annoying but manageable. I also didn’t see a price point listed other than “2.5-3x cheaper”, and without an exact price all of the “can the speeds be achieved” debate matters less. Of course they can, for an arbitrary price.
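To put rough numbers on that, here is a minimal sketch (Python). It assumes the standard Ethash access pattern of 64 DAG reads of 128 bytes per hash; the chip count, per-chip bandwidth and power are just the figures floated above, not anyone’s published specs.

Code:
# Back-of-the-envelope sketch: bandwidth-limited Ethash hashrate for a
# multi-die ASIC + HBM build. Ethash touches 64 DAG pages of 128 bytes per
# hash; chip count, bandwidth and power below are the figures from this post.
BYTES_PER_HASH = 64 * 128                # 8 KiB of DAG traffic per hash

def ethash_mhs(bandwidth_gb_per_s):
    # Peak MH/s if memory bandwidth is the only limit.
    return bandwidth_gb_per_s * 1e9 / BYTES_PER_HASH / 1e6

for chips in (4, 8):
    per_chip = ethash_mhs(1000)          # ~1 TB/s of HBM per chip (assumed)
    print(f"{chips} chips: {per_chip:.0f} MH/s each, "
          f"{chips * per_chip:.0f} MH/s total at ~{chips * 125} W")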

Next-gen GDDR6/HBM based GPUs will almost certainly achieve 75-85 MH/s (that’s in the 600-800 GB/s range; Titan V, for example). Their GPU comparison performance numbers are based on 32 MH/s per GPU - so say a previous $200 MSRP RX 570. But they probably used prices in the $500/hosted-GPU range to make their numbers look better: 16 × $500, divided by their claimed 2.5-3x, lands in the $2500-3000 range. As far as power, expect 7 next-gen GPUs to use about the same 1000 W to hit those numbers. All that I am nearly certain of is that this isn’t one massive chip; it is many, like every other ASIC for mining.
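A quick sanity check of that arithmetic, as a sketch; the $500 hosted-GPU price and the 2.5-3x claim are the figures above, not anything from a spec sheet.

Code:
# Implied unit price from their own comparison (16 GPUs x 32 MH/s baseline).
baseline_gpus = 16
mh_per_gpu = 32
hosted_gpu_price = 500                       # assumed hosted-GPU price ($)

gpu_farm_cost = baseline_gpus * hosted_gpu_price          # $8,000
for factor in (2.5, 3.0):
    print(f"{factor}x cheaper -> implied unit price ~${gpu_farm_cost / factor:,.0f}")

# How many next-gen (75-85 MH/s) GPUs match the same ~512 MH/s?
target_mh = baseline_gpus * mh_per_gpu
for nextgen_mh in (75, 85):
    print(f"at {nextgen_mh} MH/s each: {target_mh / nextgen_mh:.1f} GPUs needed")
print("roughly 7 next-gen cards, so on the order of 1000 W total")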

Back on topic, the memory bandwidth war is going to continue to be apples to apples across the board - ASIC/GPU/FPGA are all playing the same game for Ethash, etc. It simply comes down to how low a margin the manufacturer is willing to accept and how much effort the miner is willing to put in versus a turnkey solution. At least unless someone wants to unleash more than 128 VU9P-based systems, fully interconnected, and achieve 32 GH. Oh wait - that would cost more than the equivalent GPUs.
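For what it’s worth, a rough sketch of that aside; the 32 GH and 128-board figures are from this post, while the per-board and per-GPU prices are placeholder assumptions for illustration only, not quotes.

Code:
# 128 fully interconnected VU9P systems targeting 32 GH/s vs. equivalent GPUs.
fpga_boards = 128
total_ghs = 32
per_board_mhs = total_ghs * 1000 / fpga_boards      # implied 250 MH/s per VU9P system

gpu_mhs = 32                                        # same 32 MH/s GPU baseline as above
gpus_needed = total_ghs * 1000 / gpu_mhs            # 1000 GPUs

assumed_board_price = 4500                          # hypothetical VU9P system price ($)
assumed_gpu_price = 500                             # hypothetical GPU price ($)
print(f"per-board target: {per_board_mhs:.0f} MH/s")
print(f"FPGA build ~${fpga_boards * assumed_board_price:,} "
      f"vs GPU build ~${gpus_needed * assumed_gpu_price:,.0f}")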

The FPGA advocates here have not been talking about those algorithms, but about the ones that are not inherently ASIC-resistant and instead rely on regular algorithm changes to ward off ASICs. FPGAs will almost certainly be able to perform such a set of calculations much faster than GPUs.