Finally a reasonable question:
Gavin, could you please explain the reason behind trying to guess and hard-code the optimal limit, instead of doing something adaptive, like "limit = median(size of last N blocks) * 10"?
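(For concreteness, the rule being suggested might look something like this minimal C++ sketch. It is not Bitcoin Core code; the function and constant names, the 1 MB floor, and the empty-window guard are illustrative assumptions, not part of the proposal.)

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Today's 1 MB limit, kept as a hard floor so the limit can't collapse
// if miners produce a run of tiny blocks (an assumption, not part of
// the formula in the question).
static const size_t FLOOR_LIMIT = 1000000;

// 'sizes' holds the serialized sizes of the last N blocks.
// Returns 10x the median of those sizes, never below the floor.
size_t AdaptiveMaxBlockSize(std::vector<size_t> sizes) {
    if (sizes.empty()) return FLOOR_LIMIT;
    std::sort(sizes.begin(), sizes.end());
    size_t median = sizes[sizes.size() / 2];
    return std::max(FLOOR_LIMIT, median * 10);
}
```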
The problem people are worried about if the maximum block size is too high: That big miners with high-bandwidth, high-CPU machines will drive out either small miners or I-want-to-run-a-full-node-at-home people by producing blocks too large for them to download or verify quickly.
An adaptive limit could be set so that some minority of miners can 'veto' block size increases; that'd be fine with me.
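(One way to make that veto concrete, sketched under the same assumptions as the code above: compute the limit from a low percentile of recent block sizes instead of the median. Then any P% of hashpower that mines small blocks keeps the P-th-percentile block, and therefore the limit, small. The 20% figure below is purely illustrative.)

```cpp
// 'Minority veto' variant, reusing FLOOR_LIMIT from the sketch above.
// percentile must be in [0, 100); with percentile = 20, any 20%+ of
// hashpower can hold the limit down by mining small blocks, because
// the 20th-percentile block stays small until more than 80% of recent
// blocks are large.
size_t VetoableMaxBlockSize(std::vector<size_t> sizes, unsigned percentile = 20) {
    if (sizes.empty()) return FLOOR_LIMIT;
    std::sort(sizes.begin(), sizes.end());
    size_t idx = (sizes.size() * percentile) / 100;  // index of the percentile block
    return std::max(FLOOR_LIMIT, sizes[idx] * 10);
}
```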
But it wouldn't help with "I want to be able to run a full node from my home computer / network connection." Does anybody actually care about that? Satoshi didn't; his vision was home users running SPV nodes and full nodes being hosted in datacenters.
I haven't looked at the numbers, but I'd bet the number of personal computers in homes is declining or will soon be declining -- being replaced by smartphones and tablets. So I'd be happy to drop the "must be able to run at home" requirement and just go with an adaptive algorithm. Doing both is also possible, of course, but I don't like extra complexity if it can be helped.
It is hard to tease out which problem people care about, because most people haven't thought much about the block size and conflate several separate pains:
- the current pain of downloading the chain initially (pretty easily fixed by getting the current UTXO set from somebody);
- the current pain of dedicating tens of gigabytes of disk space to the chain (fixed by pruning old, spent blocks and transactions); and
- slow block propagation times (fixed by improving the code and p2p protocol).
PS: my apologies to davout for misremembering his testnet work.
PPS: I am always open to well-thought-out alternative ideas. If there is a simple, well-thought-out proposal for an adaptive blocksize increase, please point me to it.
I don't think we are going to see the number of PCs in homes go down anytime soon... the trend is upwards. What you will see is people who don't have PCs today adding lightweight devices, such as tablets, phones, etc., to their homes as their primary means of computing. I don't think you can use the argument that the number of computers capable of running a full node will be declining, because if it ever does decline, it won't be any time soon. We will have a problem with the block limit long before that is even a minor disturbance in the force.

For the record, I agree with you completely that we need to increase the block size limit; I just don't agree that the number of PCs is a valid reason. The adaptive algorithm is good, but it seems it could potentially be exploited given enough resources; even so, it might be the lesser of two evils at the moment.
Satoshi also envisioned ASICs dominating the network and mining being run in a more centralized manner, but many people forget that as well. Centralization is not necessarily a bad thing, depending on the circumstances; if it can be counterbalanced with the ability to quickly and decisively eliminate its negative effects, then I think that outweighs the risks.