Ten times the block size seems to banish scarcity far into the future in one huge jump.
Even just doubling it is a massive increase, especially while blocks are typically still far from full.
Thus to me it seems better never to more than double it in any one jump.
If tying those doublings to the block-subsidy halvings is too slow a rate of increase, then maybe use Moore's Law or thereabouts: increase by 50% yearly, or by 100% every eighteen months.
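For a sense of how the proposed schedules diverge, here is a rough sketch (the function names and the starting cap of 1 MB are my own assumptions; the halving interval is taken as roughly four years):

```python
# Hypothetical comparison of the growth schedules discussed above,
# starting from a 1 MB cap. All names here are made up for illustration.

def double_per_halving(years):
    """Double the cap at each block-subsidy halving (~every 4 years)."""
    return 1.0 * 2 ** (years // 4)

def fifty_percent_yearly(years):
    """Increase the cap by 50% per year."""
    return 1.0 * 1.5 ** years

def double_per_18_months(years):
    """Double the cap every eighteen months."""
    return 1.0 * 2 ** (years / 1.5)

# Cap in MB after 4, 8, and 12 years under each schedule
for years in (4, 8, 12):
    print(years,
          double_per_halving(years),
          round(fifty_percent_yearly(years), 1),
          round(double_per_18_months(years), 1))
```

Even the slower "Moore's Law" rate outpaces the per-halving doubling after a few years, which is the point of the comparison.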
It is hard to feel there is anywhere close to a "need" for more space when I have never yet had to pay a fee to transfer bitcoins.
The rationale for the 10MB cap is that it would allow us to scale to PayPal's transaction volume right away, and it is arguable that Bitcoin might not actually need more than that. The second rationale is that it would still allow regular people to run full nodes, thus retaining decentralization. The third rationale is that the issue of scarcity can be postponed, because it won't be an issue for a long time: we are still in the era of the large fixed block reward, and we are only very slowly moving into the "small fixed reward" era.
I have sort of started liking the idea that we would double the block size on each halving, though. The only problem with that is that if the number of Bitcoin transactions stops growing for some unrelated reason, while there is still very high (even growing) value in the blockchain, the block size would keep rising without any increase in transactions. That would lead to lessened protection for the network even though the value in the blockchain might still be very large or growing.
This is a potential issue with a 10MB limit as well, but I have a hard time believing it would materialize. Bitcoin only needs to grow roughly 20-fold to start pushing the 10MB limit. Pushing it wouldn't be bad either; 70 tx/s should be enough for a lot of things. At that point we could simply let free and very-low-fee transactions go without (fast) confirmations. That is okay, I think. The 7 tx/s cap that we have now is simply not going to be enough; that much is pretty clear. It's too limiting.
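The 7 and 70 tx/s figures fall out of simple arithmetic, assuming an average transaction size of around 250 bytes and the 10-minute target block interval (the average-size figure is an assumption, not from the post):

```python
# Rough throughput arithmetic behind the ~7 tx/s and ~70 tx/s figures,
# assuming ~250-byte average transactions (an assumption) and 10-minute blocks.

AVG_TX_BYTES = 250        # assumed average transaction size
BLOCK_INTERVAL_S = 600    # ten-minute target block interval

def tx_per_second(max_block_bytes):
    return max_block_bytes / AVG_TX_BYTES / BLOCK_INTERVAL_S

print(tx_per_second(1_000_000))   # ~6.7 tx/s with a 1 MB cap
print(tx_per_second(10_000_000))  # ~67 tx/s with a 10 MB cap
```

Shrink the assumed transaction size and the throughput rises proportionally, which is why quoted tx/s caps vary a little from post to post.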
However, I do agree that this whole issue is not something we need to act on now. Blocks currently have no scarcity to speak of. This is all about creating a plan for what we are going to do in the future; the actual hard fork would happen a year from now at the earliest.
I'm not saying that 10x is the magical number. I'm saying that both mining and running a full client are still easily done at 10 meg blocks.
If we want to cap the download overhead for the latest block at, say, 1% of the block interval, we need to be able to download MAX_BLOCKSIZE worth of data within 6 seconds on average, so that we can spend the other 99% of the time hashing.
At 1MB, you would need a ~1.7Mbps connection to keep the download time to 6s.
At 10MB, ~17Mbps.
At 100MB, ~170Mbps.
And you start to see why even a 100MB block size would render 90% of the world's population unable to participate in mining.
Even at 10MB, it requires investing in a relatively high-speed connection.
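The quoted figures can be reproduced as follows. Note that getting from 1 MB in 6 s to ~1.7 Mbps implies roughly 10 bits per byte on the wire (the common rule of thumb that folds in protocol overhead); that factor is my assumption, not stated in the post:

```python
# Sketch of the bandwidth figures quoted above. The ~10 bits-per-byte
# wire factor (8 data bits plus ~25% overhead) is an assumption needed
# to reproduce the quoted ~1.7 Mbps for a 1 MB block.

WIRE_BITS_PER_BYTE = 10   # assumed: 8 data bits plus ~25% overhead
TARGET_SECONDS = 6        # 1% of the 600 s target block interval

def required_mbps(block_bytes):
    return block_bytes * WIRE_BITS_PER_BYTE / TARGET_SECONDS / 1e6

for size in (1_000_000, 10_000_000, 100_000_000):
    print(f"{size // 1_000_000} MB -> ~{required_mbps(size):.1f} Mbps")
```

With plain 8 bits per byte and no overhead the numbers come out about 20% lower, which doesn't change the conclusion.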
Also of importance is the fact that local and international bandwidth can vary by large amounts. A 1Gbps connection in Singapore (http://www.starhub.com/broadband/plan/maxinfinitysupreme.html) only gives you 100Mbps of international bandwidth, meaning you only have 100Mbps available for receiving mined blocks.
Since a couple of people have thanked the author for posting this, I thought I should mention that only the transaction hashes need to be sent in bursts. So a block of 1000 transactions (roughly 1MB) only requires about 30KB of data to be sent in a burst, needing a ~43Kbps connection to keep the download time to 6s; 100MB blocks require ~4.3Mbps. The continuous downloading of transaction data stays below these limits.
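The burst figures follow from sending only the 32-byte transaction hashes (the hashes are 32 bytes in Bitcoin; the rest of the arithmetic is plain 8 bits per byte):

```python
# Burst-bandwidth arithmetic for hash-only block relay: 1000 txs means
# 1000 * 32 bytes of hashes, which reproduces the quoted ~43 Kbps figure.

HASH_BYTES = 32           # size of a Bitcoin transaction hash
TARGET_SECONDS = 6        # same 6 s download budget as above

def burst_kbps(tx_count):
    burst_bytes = tx_count * HASH_BYTES
    return burst_bytes * 8 / TARGET_SECONDS / 1000

print(burst_kbps(1000))     # ~42.7 Kbps for a ~1 MB block of 1000 txs
print(burst_kbps(100_000))  # ~4267 Kbps (~4.3 Mbps) for a ~100 MB block
```

So the "30KB" in the post is really 32KB of hashes; the conclusion is unchanged.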
The full block download and verification isn't needed to start hashing the next block?