So, if we really need a block size limit, and if we also need it to scale, why not make the limit adjust itself to the transaction rate, just as the difficulty of generation adjusts itself to the generation rate?
Some of the smart guys in this forum could come up with an adjustment formula, taking into consideration the total size of all transactions in the latest X blocks, and calculating what the block size limit should be for the next X blocks. Just like the difficulty factor. This way we avoid this "dangerous" constant in the protocol.
One of the things the smart guys would have to decide is how rigorous the adjustment should be. Should the adjustment always leave enough room for all transactions in the next block, or should blocks be "tight" enough to make sure that some transactions have to wait, thus pushing up the transaction fees?
Okay, I do realize that it would allow flooders to slowly increase the limit, but to what end? As long as generators aren't accepting 0-fee transactions, a flooder would have to pay to perform his attack.
So, what do you think?
I also think this is a useful idea to follow up on. In this case, it might be nice to have a "floating average" of, say, the previous 2000 blocks, plus some constant or constant percentage. I'm suggesting perhaps the mean + 50%, to give some flexibility for it to expand. We could quibble over the exact amount of expansion room (perhaps allow 100% or 200% above the mean), but some sort of limit certainly sounds like a good idea and is something very easy and quick to compute. It could also be calculated independently and quickly by all of the clients to accept or reject a particular block.
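To make the idea concrete, here's a rough sketch of what that rule might look like. The function names, the 2000-block window, and the 50% headroom are just the values suggested above, not anything from an actual client:

```python
WINDOW = 2000    # number of recent blocks to average over (suggested above)
HEADROOM = 0.5   # 50% expansion room above the mean

def next_block_size_limit(recent_block_sizes):
    """Limit for the next block, from the sizes (in bytes) of recent blocks."""
    window = recent_block_sizes[-WINDOW:]
    mean_size = sum(window) / len(window)
    return int(mean_size * (1 + HEADROOM))

def accept_block(block_size, recent_block_sizes):
    """Each client can run this check independently on an incoming block."""
    return block_size <= next_block_size_limit(recent_block_sizes)
```

Since every node sees the same prior 2000 blocks, every node computes the same limit, so there's nothing extra to agree on.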
A genuine and sustained increase in transactions over the long term would be accepted into the network at the increased rate and wouldn't get too much "push back" as the network adjusts to the new level.
Besides, we can run the current chain through the algorithm (whatever we come up with) and see how the network would have adjusted based on "real world" data. It might be fun to see just how that would have worked out, too.
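Something along these lines would do for the back-test. This is only a sketch: load_block_sizes is a hypothetical helper (the sizes would really come from a local chain index or an explorer dump), and I've bootstrapped the first window of blocks with a fixed starting limit, since the rule needs history before it can float:

```python
def simulate_limits(block_sizes, window=2000, headroom=0.5,
                    initial_limit=1_000_000):
    """Replay historical block sizes through the proposed rule and
    return what the limit would have been at each height."""
    limits = []
    for height in range(len(block_sizes)):
        if height < window:
            # Not enough history yet: fall back to a fixed constant
            # (placeholder value, standing in for the current limit).
            limits.append(initial_limit)
        else:
            recent = block_sizes[height - window:height]
            limits.append(int(sum(recent) / window * (1 + headroom)))
    return limits

sizes = load_block_sizes()      # hypothetical: historical sizes in bytes
limits = simulate_limits(sizes)
```

Plotting limits against the actual block sizes would show how often (if ever) real blocks would have bumped into the floating ceiling.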