Board: Bitcoin Discussion
Re: Why is bitcoin proof of work parallelizable?
by Forp on 05/10/2011, 10:37:46 UTC
It is possible to design a proof-of-work scheme where being part of a pool doesn't provide any (or negligible) advantage, aside from the benefit of having a more predictable revenue stream.  Perhaps that's what the OP is wondering about.
Well exactly. Joining a Bitcoin mining pool doesn't increase your revenue, it just provides you with a more predictable revenue stream.

From the discussion I realize that I did not properly describe my core goal. It is not about mining, increasing revenue, or making revenue more predictable. It is rather about understanding the implications of the current choice for the stability and speed of statistical convergence of the longest chain. The discussion here thus far has been very helpful and enables me to rephrase my thoughts, hopefully in a better way.

In a non-parallelizable PoW, we would have, say, 1,000,000 processing units all competing individually for a block. Some are faster, some are slower, but they do not differ that widely in performance. Every processing unit corresponds to a single processor (there is no parallelization advantage for a GPU or a multi-core CPU; a single person might still own several competing processing units, which might sit on one die, one GPU, or several racks).
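(To make this concrete: one well-known way to build such a non-parallelizable PoW is to chain every step on the output of the previous one, in the spirit of time-lock puzzles. The Python sketch below is only an illustration of the idea, not an actual proposal; the names and parameters are mine.)

Code:
import hashlib

def sequential_pow(seed, steps):
    # Iterated hashing: step i+1 needs the output of step i,
    # so a second core or GPU cannot speed up a single attempt.
    h = seed
    for _ in range(steps):
        h = hashlib.sha256(h).digest()
    return h

# Owning more hardware only lets you run more independent attempts,
# each still limited by the clock speed of one core.
proof = sequential_pow(b"block header", 1000000)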

In a parallelizable PoW we have a much smaller number of effective processing units competing for a block. Many of the former units cooperate simply because they sit inside the same GPU and parallelize their work; other units are even stronger because they form pools. Moreover, we see a large variance in performance (ranging from the lone Celeron miner to a miner with 100 racks x 4 high-end GPUs x 256 GPU cores).
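(For comparison, this is essentially what Bitcoin's parallelizable search looks like: every nonce is an independent trial, so the nonce space can be split across any number of cores, GPUs or pool members. A simplified Python sketch, with an 8-byte nonce and a bare target check instead of the real header layout:)

Code:
import hashlib

def find_nonce(header, target, start, stride):
    # Hashcash-style search: each nonce is an independent trial,
    # so worker k of N simply uses start=k, stride=N.
    nonce = start
    while True:
        data = header + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(data).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += stride

# Two workers splitting the same search:
#   find_nonce(h, t, 0, 2)  and  find_nonce(h, t, 1, 2)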

Now, how do these two worlds compare?

1) In the parallelizable world it is POSSIBLE to form a processing unit which holds more than 50% of the hashing power: you simply buy as many GPUs as necessary, and that's it (see the sketch after this list). In the non-parallelizable world this is not possible; the best you can do is overclock your single core or build a dedicated SHA processor.

2) As a result of (1), the distributions of processing power in the two cases are very different, and thus the two schemes will certainly behave differently.
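(A small illustration of point (1), under the usual memoryless model of parallelizable mining: each miner finds the next block after an exponentially distributed time with rate proportional to its hash power, so its chance of winning the race is exactly its share of the total hash power, regardless of how that share was assembled. The 55%/45% split below is just an assumed example.)

Code:
import random

def race(shares, trials=100000):
    # Monte Carlo of one block race: miner i finishes after an
    # Exponential(rate=shares[i]) time; the minimum wins.
    wins = [0] * len(shares)
    for _ in range(trials):
        times = [random.expovariate(s) for s in shares]
        wins[times.index(min(times))] += 1
    return [w / trials for w in wins]

# One entity that bought 55% of all GPUs vs. everyone else:
print(race([0.55, 0.45]))   # roughly [0.55, 0.45]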

Current state: I am trying to understand this difference. My (as yet unproven) mathematical gut feeling is that the convergence in the non-parallelizable setting is better (I am referring to the argument described very informally on pages 7 and 8 of Satoshi's white paper) and that the system is more robust against some forms of attack. I do not have a completely thought-through concept yet.
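(For reference, the quantity on pages 7 and 8 is the probability that an attacker with share q of the hash power ever catches up from z blocks behind. Here is my own Python transcription of the calculation sketched in the white paper; it reproduces the tables there, e.g. q=0.1, z=6 gives roughly 0.0002.)

Code:
import math

def attacker_success_probability(q, z):
    # Poisson + gambler's-ruin estimate from the white paper.
    p = 1.0 - q
    if q >= p:
        return 1.0
    lam = z * (q / p)
    total = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam)
        for i in range(1, k + 1):
            poisson *= lam / i
        total -= poisson * (1.0 - (q / p) ** (z - k))
    return total

print(attacker_success_probability(0.1, 6))   # ~0.0002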

The discussion here is very helpful for this. Thank you!