Post
Topic
Board Development & Technical Discussion
Re: Blocks are [not] full. What's the plan?
by
Saturn7
on 01/12/2013, 15:58:55 UTC
Difficulty adjustment already provides a mechanism to adjust a variable value with consensus. Why not just treat block size the same?
For example, if the average size of the last 2016 blocks is 80% of the limit, then the block size would double.
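A minimal sketch of that growth rule might look something like this (the function name, constant names, and integer thresholds here are illustrative assumptions, not anything from the reference client):

Code:
#include <cstdint>
#include <vector>

static const uint64_t INTERVAL = 2016;  // same window length as a difficulty retarget

// blockSizes: sizes in bytes of the last INTERVAL blocks
// currentMax: the block size limit in force over that window
uint64_t NextMaxBlockSize(const std::vector<uint64_t>& blockSizes, uint64_t currentMax)
{
    if (blockSizes.empty())
        return currentMax;

    uint64_t total = 0;
    for (uint64_t s : blockSizes)
        total += s;
    uint64_t average = total / blockSizes.size();

    // If the window averaged 80% full or more, double the limit.
    if (average * 5 >= currentMax * 4)   // average >= 0.8 * currentMax, in integer math
        return currentMax * 2;
    return currentMax;
}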

In the last 2016 blocks, or in the 2016 blocks which make up the previous difficulty calculation? (I think the latter would probably be a better choice.)

What, if anything, is the mechanism to shrink the blocks back down again? (Halve if the average size of the last 2016 blocks is 20% full, with a hard minimum of 1 meg?)
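For concreteness, that halve-at-20%-with-a-floor variant could be sketched as follows (again purely illustrative and hypothetical, not taken from any implementation):

Code:
#include <algorithm>
#include <cstdint>

static const uint64_t FLOOR_SIZE = 1000000;  // hypothetical 1 MB hard minimum

// average: average block size in bytes over the window
// currentMax: the block size limit in force over that window
uint64_t NextMaxBlockSizeWithShrink(uint64_t average, uint64_t currentMax)
{
    if (average * 5 >= currentMax * 4)   // >= 80% full: double
        return currentMax * 2;
    if (average * 5 <= currentMax)       // <= 20% full: halve, but never below the floor
        return std::max(currentMax / 2, FLOOR_SIZE);
    return currentMax;
}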

I suspect this might be vulnerable to blockchain-forking attacks which near-simultaneously release very differently sized blocks, but it's hard to say without a full specification.

Depending on your answer to the second question, it also might increase the incentives for miners to release blocks with as few transactions as possible.

It also generally makes the design of mining software more complicated and thus more vulnerable to attack. Being able to statically allocate the size of a block is a definite advantage, though I don't know off hand how the reference implementation handles this. I'd say some hard maximum is necessary, even if it's ridiculously huge. But then what's the advantage of not just setting the maximum at whatever that hard maximum is?

In the end this might be viable, but I'd want a lot more details.

I would say the 2016 blocks which make up the previous difficulty calculation.

I don't think it should shrink. There may be periods where blocks are not fully utilised, but if that became an ongoing trend it would only mean people had stopped using Bitcoin.

I would say there are fewer risks in slowly growing the block size over time than in just not having a limit at all (even if there were a large hypothetical limit). We also need to consider network propagation time. If out of the blue we had a 1-gigabyte block, would all the clients globally have this data within ~10 minutes (closer to 6 minutes while the network hash rate is growing)?
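Rough arithmetic behind that concern: the sustained bandwidth every node would need to receive a 1 GB block within one block interval, using illustrative figures only (the 6-minute case stands in for intervals shrinking when hash rate grows faster than difficulty retargets):

Code:
#include <cstdio>

int main()
{
    const double block_bytes = 1e9;             // a hypothetical 1 GB block
    const double intervals[] = {600.0, 360.0};  // ~10 min and ~6 min, in seconds

    for (double seconds : intervals) {
        // bytes -> bits, divided by the interval, expressed in Mbit/s
        double mbit_per_s = block_bytes * 8.0 / seconds / 1e6;
        std::printf("%.0f s interval: ~%.1f Mbit/s sustained per node\n",
                    seconds, mbit_per_s);
    }
    return 0;
}

That works out to roughly 13 Mbit/s sustained at a 10-minute interval and about 22 Mbit/s at 6 minutes, before any protocol or relay overhead.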