My favorite solution right now is simply to link the max size to difficulty in some fashion; that makes the most sense to me.
This doesn't take scarcity into account, and it would require an oracle to provide the constants for the formula linking size to difficulty. It's easy to see the case where difficulty outpaces transaction volume; we're about to see that now with ASICs coming online. Once the maximum block size is large enough that all pending transactions fit, we're back to the case where there's no limit and fees trend toward zero. Hopefully that example kills off any ideas about tying block size to difficulty.
I'll repost the scheme I described elsewhere. It uses only information found in the block chain and should be resistant to miners gaming the system. It increases the maximum block size only in a way that preserves scarcity, and it doesn't depend on measurements of time or bandwidth. I'm not claiming this is the perfect system, but it provides some ideas to use as a starting point (a rough sketch in code follows the list):
1) Block size adjustments happen at the same time that the network difficulty adjusts.
2) On a block size adjustment, the size either stays the same or is increased by a fixed percentage (say, 10%). This percentage is a baked-in constant requiring a hard fork to change.
3) The block size is increased if more than 50% of the blocks in the previous interval have a size greater than or equal to 90% of the max block size. Both of the percentage thresholds are baked in.
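Here is a minimal sketch of that rule in Python, assuming Bitcoin's 2016-block retarget interval; the function and constant names are my own and are only meant to illustrate the scheme, not to be drop-in consensus code:

```python
# Sketch of the adjustment rule above. Names and structure are illustrative only.

INTERVAL = 2016            # blocks per difficulty retarget (Bitcoin's value)
GROWTH_FACTOR = 1.10       # fixed 10% increase, baked in (hard fork to change)
VOTE_THRESHOLD = 0.50      # more than 50% of blocks must be "nearly full"
FULLNESS_THRESHOLD = 0.90  # a block counts as nearly full at >= 90% of max size


def next_max_block_size(current_max_size, block_sizes):
    """Return the max block size for the next interval.

    block_sizes: sizes in bytes of the blocks in the interval just ended.
    The size either stays the same or grows by the fixed percentage;
    it never decreases under this scheme.
    """
    assert len(block_sizes) == INTERVAL
    nearly_full = sum(
        1 for s in block_sizes if s >= FULLNESS_THRESHOLD * current_max_size
    )
    if nearly_full > VOTE_THRESHOLD * INTERVAL:
        return int(current_max_size * GROWTH_FACTOR)
    return current_max_size
```

Because the size can only stay flat or grow by the fixed percentage, miners can only push it upward, and only by actually filling the blocks they mine during a whole retarget interval.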
How high would such a hard limit be? Can we estimate how many transactions per second, say, a one-gigabyte hard limit could process?
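For rough scale, here is a back-of-envelope estimate, assuming an average transaction size of about 250 bytes and the ten-minute target block interval (both figures are my assumptions, not numbers from this thread):

```python
# Rough scale only; the 250-byte average transaction size is an assumption.
MAX_BLOCK_BYTES = 1_000_000_000   # a one-gigabyte hard limit
AVG_TX_BYTES = 250                # assumed average transaction size
BLOCK_INTERVAL_SECONDS = 600      # ten-minute target block interval

tx_per_block = MAX_BLOCK_BYTES / AVG_TX_BYTES          # ~4,000,000 transactions
tx_per_second = tx_per_block / BLOCK_INTERVAL_SECONDS  # ~6,700 transactions/second
print(f"{tx_per_second:,.0f} tx/s")
```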
Any adjustment to the maximum block size must preserve scarcity. The question is not how many transactions a one-gigabyte hard limit can handle, but rather whether a one-gigabyte hard limit would produce sufficient scarcity.