A GTX 1070 does 5.25 GH/s on XVC (Blake-256, 8 rounds) while pulling 150 W - that works out to 35 MH/s/W.
And I'd stress that it's a 16nm part. With my full-custom design on one of my 28nm FPGAs, I get 2.1 GH/s at 24 W - that's 87.5 MH/s/W.
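The arithmetic behind those two figures is just hashrate divided by power. A quick sketch using the numbers quoted above (the helper function is purely illustrative):

```python
def efficiency_mhs_per_watt(hashrate_ghs: float, power_w: float) -> float:
    """Convert a hashrate in GH/s and a power draw in watts to MH/s per watt."""
    return hashrate_ghs * 1000 / power_w

# Figures from the posts above.
gpu = efficiency_mhs_per_watt(5.25, 150)   # GTX 1070 on Blake-256 8-round
fpga = efficiency_mhs_per_watt(2.1, 24)    # full-custom design on a 28nm FPGA

print(f"GPU:  {gpu:.1f} MH/s/W")                 # 35.0
print(f"FPGA: {fpga:.1f} MH/s/W")                # 87.5
print(f"FPGA advantage: {fpga / gpu:.1f}x")      # 2.5x
```

So even two full process generations behind, the FPGA comes out 2.5x ahead on perf-per-watt.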
As it is, this fight is one-sided. If they had been manufactured on the same node, it wouldn't be a fight - it would be an execution.
Well, it's not fair to compare a GPU with an FPGA. An FPGA does one thing well at a time, and then you need to reprogram it; a GPU can do multiple things, and it consumes more energy because of that. If GPUs were specialized only for mining, they'd just be ASICs - so no, it's not all about the process node.
Yeah, but my point was that, from a mining perspective, a GPU and an FPGA can both mine many algos. The FPGA may be somewhat more restricted in its selection, but it can still switch. So comparing raw hash/watt as a figure of merit is faulty unless you also normalize for process node, which is why I flagged the node difference.