Board: Development & Technical Discussion
Topic: Re: How a floating blocksize limit inevitably leads towards centralization
by Realpra on 27/02/2013, 17:55:28 UTC

Do any of you guys remember my "swarm client" idea? It would move Bitcoin from O(n*m) to O(n), and the network would share both the storage and the processing load.
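For a rough sense of how that load-sharing might look, here is a minimal Python sketch, assuming addresses are assigned to shards by hashing; NUM_SHARDS, shard_of, and SwarmClient are illustrative names I made up, not anything from the original proposal.

Code:
import hashlib

NUM_SHARDS = 16  # hypothetical shard count; each S-client watches ~1/16 of addresses

def shard_of(address: str) -> int:
    """Map an address to a shard by hashing it; every client agrees on the mapping."""
    digest = hashlib.sha256(address.encode()).digest()
    return digest[0] % NUM_SHARDS

class SwarmClient:
    def __init__(self, my_shard: int):
        self.my_shard = my_shard
        self.histories = {}  # address -> list of transactions this client stores

    def watches(self, address: str) -> bool:
        return shard_of(address) == self.my_shard

    def ingest(self, address: str, tx) -> None:
        # Only store and verify transactions touching my shard, so per-client
        # work is O(n / NUM_SHARDS) instead of every client redoing all O(n) work.
        if self.watches(address):
            self.histories.setdefault(address, []).append(tx)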


Quote
Searching the forum for "swarm client" yields nothing.  Link?
https://bitcointalk.org/index.php?topic=87763.0
(Second search result :P)

Quote
I read your proposal, and could find no details about how a swarm client could actually divide up the task of verification of blocks.  That or I simply didn't understand it. 
The details are a little hairy, but the core idea is simple: it is expensive to fully validate a block, BUT cheap to show a flaw in one.

To show a block is invalid, just one S-client needs to tell the rest of the network that it has found a double spend. The accusation can be proven by sending along the transaction history for the address in question.
That history cannot be faked, thanks to the Merkle tree structure the blocks are built on.
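That Merkle commitment is what makes the proof cheap to check: a transaction's inclusion in a block can be verified against the block header alone. Here is a minimal Python sketch of that check, using Bitcoin's double SHA-256; the (sibling, sibling_is_left) proof format is just an illustrative convention, not a wire format from the proposal.

Code:
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_proof(tx_hash: bytes, proof, merkle_root: bytes) -> bool:
    """Walk from the transaction up to the root. `proof` is a list of
    (sibling_hash, sibling_is_left) pairs. If the recomputed root matches
    the root in the block header, the transaction really is in that block,
    so a forged history would fail this check."""
    node = tx_hash
    for sibling, sibling_is_left in proof:
        if sibling_is_left:
            node = dsha256(sibling + node)
        else:
            node = dsha256(node + sibling)
    return node == merkle_root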

Even if the S-clients keep a full history of each address they watch and exchange it whenever an accusation is made, the computing power saved should still be substantial, despite many addresses being tangled together.
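Once histories are exchanged, checking an accusation is mechanical: scan for two transactions that spend the same output. A sketch, assuming each history entry is a (txid, inputs) pair where inputs is a list of (prev_txid, output_index) outpoints; these shapes are illustrative, not from the thread.

Code:
def find_double_spend(history):
    """history: list of (txid, inputs) pairs. Returns the two conflicting
    txids, or None if every output is spent at most once."""
    spent_by = {}  # outpoint -> txid that spent it
    for txid, inputs in history:
        for outpoint in inputs:
            if outpoint in spent_by:
                return spent_by[outpoint], txid  # same output spent twice: fraud proof
            spent_by[outpoint] = txid
    return None

# Example: tx "b" and tx "c" both spend output 0 of tx "a" -> double spend
history = [("b", [("a", 0)]), ("c", [("a", 0)])]
assert find_double_spend(history) == ("b", "c")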

There was also talk of combining this with a 5-10 year ledger system, which would put a cap on the running blockchain's size.
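The thread gave no details on that ledger, but the rough shape would be: periodically freeze all balances into a snapshot, then discard the raw blocks behind it. A toy Python sketch under exactly those assumptions (interval and names are mine, not from the thread):

Code:
LEDGER_INTERVAL = 5 * 52560  # roughly 5 years of 10-minute blocks (illustrative)

def snapshot_and_prune(blocks, balances, height):
    """At each ledger boundary, freeze the current balances into a ledger
    and drop the raw blocks behind it; new clients bootstrap from the
    ledger, so the running chain stays bounded in size."""
    if height % LEDGER_INTERVAL != 0:
        return None
    ledger = dict(balances)   # frozen balance sheet at this height
    blocks.clear()            # toy pruning: discard all prior raw blocks
    return ledger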