Post
Topic
Board Mining (Altcoins)
Re: CCminer(SP-MOD) Modded NVIDIA Maxwell kernels.
by
bensam1231
on 20/07/2016, 05:46:30 UTC
YOU are arguing semantics - the point was that they can do it.

Also, just because Eth is better on GPU than most FPGA doesn't mean shit - it CAN be done on FPGA, it's just not as good. It's better for some things, not for others.

They can do it if they're capable of actually doing it. That was the whole point of what we're talking about: you're taking the position that an FPGA is just a better GPU. I took the position that an FPGA is somewhere in between a GPU and an ASIC, in that it can do more than an ASIC, but not nearly as much as a GPU (hence they represent a different class, and why you can't directly compare an FPGA to a GPU).

I literally just proved the point by showing that FPGAs can't do everything a GPU can do, and you called it semantics.

Quote
FPGAs are in a completely different class... You could lump ASICs into that comparison too. They also can mine multiple algos... They just have to be built from the ground up each and every time. To that extent, so do miners for GPUs (depending on how different the algo is from other ones already made), but the time requirement is quite a bit different.

The whole memory bit was about needing to buy different FPGAs for different algos, much like ASICs, because depending on what you're mining, an FPGA can't always do it. You don't need a new FPGA for every algo, but you do for some... Still, once again, somewhere in between GPUs and ASICs.

And a 2GB 370X can't do everything a 390X can do -- are they in different "classes" now?

I didn't say an FPGA was a GPU, far from it - I took the point that they are similar from a MINING standpoint.

Considering they can mine the same coins... No...

Basically, GPUs within the last 3-5 generations have all been capable of mining the same coins; the only possible exception to this is recently with Ethereum and memory issues. Sometimes miners can even go back more generations than that.

They aren't any more similar to GPUs from a mining standpoint than an ASIC is, which is where this all stemmed from. You pointed out how awesome FPGAs are and were gloating about efficiency over GPUs, when it really doesn't matter any more than the efficiency of ASICs does, because you can't directly compare all three to each other; they're in different classes for all the reasons we covered already.

GTX 1070 is 5.25GH/s on XVC (Blake-256 8 round), pulling 150W - this gives it a MH/s/W value of 35MH/s/W.

I'd like to stress that the 1070 is on a 16nm process. With my full-custom design on one of my 28nm FPGAs, I get 2.1GH/s at 24W - this gives it an MH/s/W value of 87.5MH/s/W.

As it is, this fight is one-sided. If they had been manufactured on the same node, it wouldn't be a fight - it would be an execution.
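The efficiency numbers above are easy to sanity-check. A minimal sketch, using only the hashrate and power figures quoted in the post (the helper name and the comparison script are mine, not from any miner's code):

```python
def mh_per_watt(hashrate_ghs: float, power_w: float) -> float:
    """Convert a hashrate in GH/s and power draw in watts to MH/s per watt."""
    return hashrate_ghs * 1000 / power_w

# Figures as claimed in the post
gtx_1070 = mh_per_watt(5.25, 150)   # Blake-256 8-round on XVC, 16nm GPU
fpga_28nm = mh_per_watt(2.1, 24)    # full-custom design on a 28nm FPGA

print(f"GTX 1070:  {gtx_1070:.1f} MH/s/W")            # ~35.0
print(f"28nm FPGA: {fpga_28nm:.1f} MH/s/W")           # ~87.5
print(f"FPGA advantage: {fpga_28nm / gtx_1070:.1f}x") # ~2.5x
```

So the 28nm FPGA comes out roughly 2.5x more efficient per watt despite being two process nodes behind, which is the point being made.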

Well, it's not fair to compare a GPU with an FPGA. An FPGA can only do one thing well at a time, then you need to reprogram it; a GPU can do multiple things.

A GPU will consume more energy because of that. If GPUs were specialized only for mining, they would just be ASICs. So yes, it's not all about the nm process node.