Re: Algorithmically placed FPGA miner: 255MH/s/chip, supports all known boards
Board: Hardware
Posted by kano on 18/10/2012, 02:42:26 UTC
This is the double compression fallacy.  The only way this works is by my servers doing part of the hashing work, in which case… what the heck is the point?

First, this isn't a compression fallacy.

Yes, that's exactly what it is.

I'm not going to write a post explaining the double compression fallacy to you.


You have the bulk of the data already (the work)

Nonces-which-are-shares are statistically random and therefore incompressible.  Every possible 32-bit integer is equally likely to be the solution to a randomly chosen piece of work.  You're trying to claim that knowing which piece of work it solves somehow effortlessly adds information.  It doesn't.

Given a nonce-to-be-transmitted, the Shannon entropy of the additional foreknowledge of the work which it solves is exactly zero bits unless you actually do the work -- subject to the assumption that SHA-256 is a one-way hash function.  So the only way this is not double compression is if you've somehow found a way to invert SHA-256 and it isn't a one-way hash function anymore.  If you have figured this out, you shouldn't be wasting your time here on the forum -- you should be coding up that inversion function, mining 99% of the bitcoins, and rolling in piles of money.

Call me when you're rolling in piles of money.
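ET's incompressibility claim - that hash outputs, and hence share-solving nonces, behave like uniform random values with no structure to exploit - is easy to spot-check empirically. A minimal Python sketch (the 76-byte header prefix and the easy 1-in-256 target are arbitrary choices made up for the demo, not real mining parameters):

```python
import hashlib
import struct

def double_sha256(data):
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Arbitrary 76-byte header prefix; only the 4-byte nonce varies.
header = bytes(range(76))

# If SHA-256 behaves like a random oracle, each nonce clears an easy
# target (leading hash byte zero) independently with probability 1/256,
# regardless of its numeric value - there is no pattern in which nonces
# win, so there is nothing for a compressor to exploit.
trials = 100_000
hits = sum(
    1
    for nonce in range(trials)
    if double_sha256(header + struct.pack("<I", nonce))[0] == 0
)
print(hits)  # close to trials / 256, i.e. roughly 390
```

The hit count lands near the 1/256 expectation no matter which slice of the nonce range you test, which is exactly the "statistically random" behaviour ET is pointing at.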

Hmm - since there seems to be an interesting discussion going on here, I thought I'd add to it :)

Firstly, yep, the amount of money made from this would probably be pretty small ... but that's not my point of interest :)

ET is playing Luke-jr's game of word play, and on the word itself he's correct: as compression, it would be a double compression fallacy.

'Compressing' 2 nonces into a single one is quite simple if you take the stance of having the server do the missing work that wasn't sent back - so it isn't compression at all, but rather a workaround for deliberately losing half the data.

When you hash a block header you simply roll the nonce value from 0 to ~4 billion (32 bits, 2^32) and check each result against the target to find the nonce value you want - which is how EVERYONE mines ...

So e.g. packing 2 half nonces into a single full nonce, then having to regenerate the missing half of each nonce (65536 hashes x two = 131072 hashes), is really something quite simple for any CPU to do - it is 32768 times (2^32 / 2^17) faster for the CPU to regenerate both halves than to work out the full nonce for even one of them from scratch.
As long as the server's hash rate is at worst 32768 times smaller than the hash rate of the miners sending it data, the server can keep up.
So if a server can hash at 20MH/s then it can handle ~650GH/s (20MH/s x 32768) of incoming half-missing-nonce data.
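The scheme kano describes - drop the low 16 bits of each nonce and let the server brute-force them back - can be sketched as follows. This is a minimal Python illustration, not any actual protocol: the all-zero header, the toy share test (two leading zero hash bytes, roughly a 1-in-65536 target, standing in for a real difficulty check), and the function names are all assumptions made up for the example.

```python
import hashlib
import struct

def double_sha256(data):
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def is_share(header76, nonce):
    # Toy share test: "first two hash bytes are zero" so the example
    # runs in well under a second; a real check compares against a
    # difficulty target.
    h = double_sha256(header76 + struct.pack("<I", nonce))
    return h[0] == 0 and h[1] == 0

def recover_nonce(header76, high16):
    # Server side: the miner sent only the top 16 bits of the nonce,
    # so try all 65536 candidates for the missing low half.
    for low16 in range(1 << 16):
        nonce = (high16 << 16) | low16
        if is_share(header76, nonce):
            return nonce
    return None

# Demo: find a share the normal way, then pretend only the top half of
# its nonce was transmitted and recover the rest server-side.
header = bytes(76)  # arbitrary all-zero header prefix for illustration
full = next(n for n in range(1 << 20) if is_share(header, n))
assert recover_nonce(header, full >> 16) == full

# Capacity arithmetic from the post: regenerating each half costs at
# most 2**16 hashes, versus ~2**32 to find a full nonce from scratch,
# giving kano's conservative ratio of 2**32 // (2 * 2**16) == 32768.
ratio = 2**32 // (2 * 2**16)
supported = 20_000_000 * ratio  # a 20MH/s server
print(supported)  # 655_360_000_000 hashes/s of miners served, ~655 GH/s
```

Note the ratio is conservative by design: it charges the server for regenerating *two* halves against the cost of finding just *one* full nonce, which is where the factor of 32768 (rather than 65536) comes from.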

So ... I don't see a flaw in this concept (other than if you are totally fixated on the money), beyond the fact that it isn't 'compression' - it's simply throwing away data that can be regenerated within an acceptably short amount of time.