Board: Development & Technical Discussion
Topic: Re: How a floating blocksize limit inevitably leads towards centralization
Post by: Peter Todd on 18/02/2013, 20:35:12 UTC
In the absence of a block size cap miners can be supported using network assurance contracts. It's a standard way to fund public goods, which network security is, so I am not convinced by that argument.

Network assurance contracts are far from a sure thing. They're basically an attempt to solve the tragedy of the commons, and the success rate society has had there is pitiful, even with strong central authorities. Assuming they will work is a big risk.

Perhaps I've been warped by working at Google so long, but 100,000 transactions per second just feels totally inconsequential. At 100x the volume of PayPal, each node would only need to be a single machine, and not even a very powerful one. So there's absolutely no chance of Bitcoin turning into a PayPal equivalent even if we stop optimizing the software tomorrow.

But we're not going to stop optimizing the software. Removing the block cap means a hard fork, and once we've decided to do that we may as well throw in some "no brainer" upgrades as well, like supporting ed25519, which is orders of magnitude faster than ECDSA over secp256k1. Then a single strong machine can handle hundreds of thousands of transactions per second.
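(For reference, here's a rough micro-benchmark sketch of single-core verification throughput for the two schemes. It uses the Python `cryptography` package's OpenSSL-backed primitives rather than Bitcoin's actual verification code, and the message size and loop count are arbitrary, so treat the numbers as indicative only.)

Code:
import time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

msg = b"\x00" * 250  # roughly transaction-sized payload (arbitrary)

# Ed25519 keypair and signature
ed_priv = Ed25519PrivateKey.generate()
ed_pub = ed_priv.public_key()
ed_sig = ed_priv.sign(msg)

# ECDSA over secp256k1 keypair and signature
ec_priv = ec.generate_private_key(ec.SECP256K1())
ec_pub = ec_priv.public_key()
ec_sig = ec_priv.sign(msg, ec.ECDSA(hashes.SHA256()))

def verifies_per_second(verify, rounds=2000):
    start = time.perf_counter()
    for _ in range(rounds):
        verify()
    return rounds / (time.perf_counter() - start)

print("ed25519:         %.0f verifies/s" %
      verifies_per_second(lambda: ed_pub.verify(ed_sig, msg)))
print("secp256k1 ECDSA: %.0f verifies/s" %
      verifies_per_second(lambda: ec_pub.verify(ec_sig, msg, ec.ECDSA(hashes.SHA256()))))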

I don't see any reason to think CPU power will be the issue. It's network capacity and disk space that are the problem. Your 100x the volume of PayPal works out to 4000 transactions a second, or about 1.2MiB/second, and you'll want to be able to burst quite a bit higher than that to keep your orphan rate down when new blocks come in. Like it or not, that's well beyond what most internet connections in most of the world can handle, both in sustained speed and in quota (that's roughly 3TiB/month). Again, P2Pool will look a heck of a lot less attractive.
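Back-of-the-envelope, assuming an average transaction size of roughly 300 bytes (my assumption; the figures above imply something in that range):

Code:
# Back-of-the-envelope bandwidth for relaying 4000 tx/s.
# Assumption: ~300 bytes per transaction on average.
TX_PER_SEC = 4000
AVG_TX_BYTES = 300
SECONDS_PER_MONTH = 30 * 24 * 3600

bytes_per_sec = TX_PER_SEC * AVG_TX_BYTES
mib_per_sec = bytes_per_sec / 2**20
tib_per_month = bytes_per_sec * SECONDS_PER_MONTH / 2**40

print(f"sustained: {mib_per_sec:.2f} MiB/s")      # ~1.1 MiB/s
print(f"monthly quota: {tib_per_month:.2f} TiB")  # ~2.8 TiB

That only counts receiving each transaction once; forwarding transactions to multiple peers and serving blocks to other nodes would push the real numbers higher still.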

You also have to ask the question: what % of that 3TiB/month results in unspent txouts? Ultimately it's the UTXO set that is the hard limit on the storage requirements for full validating nodes. Even at just 1% volume growth, you're looking at 3GiB/month growth in your requirement for fast-random-access memory. That's an ugly, ugly requirement - after all, if a block has n transactions, your average access time per transaction must be limited to 10 minutes/n just to keep up.
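Concretely, under the same assumed 4000 tx/s rate - and with the 1%-of-transactions and ~35-byte-per-unspent-output figures being illustrative assumptions, not measurements:

Code:
# Per-transaction UTXO access budget and rough UTXO growth.
# Assumptions (for illustration): 4000 tx/s, one txin lookup per tx,
# 1% of transactions leave a ~35-byte output unspent indefinitely.
TX_PER_SEC = 4000
BLOCK_INTERVAL = 600                      # seconds (10 minutes)
n = TX_PER_SEC * BLOCK_INTERVAL           # transactions per block: 2,400,000

budget_us = BLOCK_INTERVAL / n * 1e6
print(f"avg UTXO access budget: {budget_us:.0f} microseconds per tx")  # 250 us

UNSPENT_FRACTION = 0.01                   # assumed share of txs adding a lasting output
UTXO_ENTRY_BYTES = 35                     # assumed size of a stored unspent output
growth = TX_PER_SEC * UNSPENT_FRACTION * UTXO_ENTRY_BYTES * 30 * 24 * 3600
print(f"UTXO growth: {growth / 2**30:.1f} GiB/month")  # ~3.4 GiB/month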

EDIT: also, it occurs to me that one of the worst things about the UTXO set is the continually increasing overhead it implies. You'll probably be lucky if cost/op/s scales by even something as good as log(n) due to physical limits, so you'll gradually be adding more and more expensive, constantly on-line hardware for less and less value. All the time you spend waiting for transactions to be retrieved from memory is time you aren't hashing. In addition, your determinism goes down because the UTXO set will inevitably be striped across multiple storage devices, so at worst every tx turns out to be behind one low-bandwidth connection. God help you if an attacker figures out a way to find the worst sub-set to pick. UTXO proofs can help a bit - a transaction would include its own proof that it is in the UTXO set for each txin - but that's a lot of big scary changes with consensus-sensitive implications.
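As a toy model of the striping problem - all latencies and counts below are made up purely for illustration, and lookups are treated as serial for simplicity - random transactions average out across devices, while an adversarially chosen set of inputs that all land on the slowest device pushes validation time toward the worst case:

Code:
import random

# Toy model: UTXO set striped across devices with different lookup latencies.
DEVICE_LATENCY_US = [50, 80, 120, 2000]   # three fast devices, one slow one
TXINS_PER_BLOCK = 100_000

def block_lookup_seconds(latencies_us):
    # Serial lookups for simplicity; total time in seconds.
    return sum(latencies_us) / 1e6

# Random case: each txin hits a uniformly random device.
random_case = block_lookup_seconds(
    random.choice(DEVICE_LATENCY_US) for _ in range(TXINS_PER_BLOCK))

# Adversarial case: a block crafted so every txin lives on the slow device.
worst_case = block_lookup_seconds(
    max(DEVICE_LATENCY_US) for _ in range(TXINS_PER_BLOCK))

print(f"random inputs:      ~{random_case:.0f} s of lookups per block")
print(f"adversarial inputs: ~{worst_case:.0f} s of lookups per block")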

Again, keeping blocks small means that scaling mistakes, like the stuff Sergio keeps on finding, are far less likely to turn into major problems.

The cost of a Bitcoin transaction is just absurdly low and will continue to fall in future. It's like nothing at all. Saying Bitcoin is going to get centralized because of high transaction rates is kinda like saying in 1993 that the web can't possibly scale because, if everyone used it, web servers would fall over and die. Well yes, they would have done, in 1993. But not everyone started using the web overnight, and by the time they did, important web sites were all using hardware load balancers and multiple data centers, and it was STILL cheap enough that Wikipedia - one of the world's top websites - could run entirely off donations.

Your example has nothing to do with Bitcoin. Even in the early days it would have been obvious to anyone who understood comp-sci that static websites scale O(1) per client, so there isn't any reason to think you couldn't create websites for as much load as you wanted. Meanwhile, unlike Wikipedia, Bitcoin requires global shared state that must be visible to, and mutable by, every client. Comparing the two ignores some really basic computer science that was very well understood even when the early internet was created in the '70s.
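To put that in concrete terms, a minimal cost sketch (the unit costs and totals are made up; only the scaling shapes matter): a static web server's work grows only with its own clients, while every full validating node has to process every transaction in the network no matter how many nodes there are:

Code:
# Made-up totals; only the scaling shapes matter.
TOTAL_CLIENTS = 1_000_000
REQUESTS_PER_CLIENT = 10
TOTAL_NETWORK_TX = 1_000_000

def per_server_load(num_servers):
    # Static content: clients split across servers, each server is O(its own share).
    return TOTAL_CLIENTS * REQUESTS_PER_CLIENT / num_servers

def per_full_node_load(num_nodes):
    # Global shared state: every full node validates every transaction,
    # so per-node load is independent of how many nodes exist.
    return TOTAL_NETWORK_TX

for n in (1, 10, 100):
    print(f"{n:>3} web servers -> {per_server_load(n):>12,.0f} requests each; "
          f"{n:>3} full nodes -> {per_full_node_load(n):>12,.0f} txs each")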