Board Bitcoin Discussion
Re: Why is bitcoin proof of work parallelizable ?
by
Forp
on 05/10/2011, 20:13:24 UTC
To begin, the discussion and the reference to the proof-of-stakes thread, is very helpful to me. Thank you.

This isn't about parallelizable vs. non-parallelizable computations. Performance of serial computations doesn't scale linearly with more computing cores, but that is irrelevant here. This is about the process of block finding, which is why I asked whether your system diverges fundamentally from the notion that blocks are something found once in a while by people on the network. If not, then it's really "if Billy and Sally each have an apple, that's two apples" math: if in a given scenario (not in distinct scenarios) two people find 1 block each, then the two of them together find 2 blocks. If a network of 4000 people finds 4000 blocks per month, each finds on average 1 block per month. The average alone isn't enough to know the distribution (it's possible one person finds all 4000), but the best scenario is when each finds close to 1.

It also means that if in a given situation 4000 people find 4000 blocks, each finding about 1, then if I join in it would only be fair if I also find about 1 (or, more precisely, that each of us will now find 4000/4001 blocks).
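The fairness arithmetic above can be sketched in a few lines (a hypothetical illustration, assuming every participant contributes equal hashing power; the function name is mine, not from the thread):

```python
# Hypothetical illustration: with equal hashing power, the expected
# number of blocks per miner is simply total_blocks / number_of_miners.
def expected_blocks(total_blocks, num_miners):
    return total_blocks / num_miners

# 4000 miners sharing 4000 blocks per month: 1 block each on average.
print(expected_blocks(4000, 4000))

# One more miner joins: each now expects 4000/4001 blocks per month.
print(expected_blocks(4000, 4001))
```

Note that this only pins down the expectation, not the distribution around it, which is exactly the caveat raised above.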

Agreed. I guess we somehow got stuck in a misunderstanding, which I might have caused.

Yes, with parallelizable PoW you can overwhelm the network given enough time and money. My contention is that non-parallelizability makes the problem worse, not better. With a fully serial PoW only the fastest participant will accomplish anything, so no one else is incentivized to contribute his resources. So this one person can mount the attack, and even if he's honest, it's only his resources that stand against a potential attacker (rather than the resources of many interested parties).

Agreed. A sequential deterministic PoW does not do the job, for the obvious reasons you are giving. We need both randomization of block winners AND non-parallelizability, and I am curious how this can be done.

The issue I take is this: a very high number of CPUs has a different effect for parallelizable PoWs than for non-parallelizable PoWs (the "Bill Gates" attack).

I'm still not completely sure what the requirements are; this whole discussion has been confusing to me. But yes, from a "back to basics" viewpoint it seems to me that a serial computation only makes it easier for one entity to dominate the blockchain, making the "better security" requirement impossible. Again, if multiple computers don't give more power over the network, it means the attacker doesn't have to compete against multiple computers, only against one.

We are reaching common ground. In my model, multiple computers do give more power to the network, but not because a single PoW is solved faster in time. When we add computers to the network, the PoWs I am thinking of must be adapted, as in normal Bitcoin. The effect, however, is that the probabilities of finding a solution are rearranged, not the overall time for solving a block.
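That distinction can be sketched as follows (my illustration, not code from the thread): with difficulty retargeting, adding hashing power leaves the expected block interval unchanged and only redistributes each participant's probability of winning a block.

```python
# Sketch: after difficulty retargeting the block interval stays constant;
# what changes is each miner's share of the win probability.
def win_probabilities(hashrates):
    total = sum(hashrates)
    return [h / total for h in hashrates]

# Four equal miners: each wins 25% of blocks.
print(win_probabilities([10, 10, 10, 10]))

# A fifth equal miner joins; the block interval is retargeted back to
# its old value, but each miner now wins only 20% of blocks.
print(win_probabilities([10, 10, 10, 10, 10]))
```

The point of the sketch is that total network power affects the difficulty target, while the individual probabilities depend only on relative shares.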

You mean an alternative Bitcoin-like currency, or something that doesn't look anything like it? If the former, I doubt this will be applicable; if the latter, I can only speculate unless you give more details about the application.

The Bitcoin code progresses slowly, probably mostly because of the sophistication of the code, but I trust that all sufficiently good ideas will make it in eventually.

I am thinking not of a currency-like application but of a replicated directory service kind of application. Since this would be written from scratch, there is a chance to try different PoW systems without having to break the old algorithm (or the mindsets of developers).

Thanks again for challenging my thoughts in this discussion. This is very fruitful.