Board: Development & Technical Discussion
Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
by solex on 08/10/2014, 09:24:16 UTC
My concern is that there is little room for error with geometric growth. Let's say that things are happily humming along with bandwidth and block size both increasing by 50% per year. Then a decade goes by in which bandwidth only increases by 30% per year. Over that decade block size grows to 5767% of its starting value while bandwidth grows to only 1379%. So now people's connections are only 24% as capable of handling the blockchain.
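For anyone who wants to check the arithmetic, here is a throwaway calculation of those compound-growth figures (a standalone sketch, not part of any client code):

Code:
#include <cmath>
#include <cstdio>

int main()
{
    // Ten years of compound growth: block size at 50%/year, bandwidth at 30%/year.
    double blockGrowth     = std::pow(1.50, 10);   // ~57.67x, i.e. 5767%
    double bandwidthGrowth = std::pow(1.30, 10);   // ~13.79x, i.e. 1379%

    // How capable a node's connection is, relative to the load, after that decade.
    double relativeCapacity = bandwidthGrowth / blockGrowth;   // ~0.24, i.e. 24%

    std::printf("block: %.0f%%  bandwidth: %.0f%%  relative capacity: %.0f%%\n",
                blockGrowth * 100, bandwidthGrowth * 100, relativeCapacity * 100);
    return 0;
}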

Not a big deal?  Well, except that we can expect the power of nodes to follow some sort of curve ("exponential" in the vernacular) such that most nodes are barely above the threshold needed to remain viable.  That means an event like this would cause the majority of nodes to shut down, likely permanently.

Compression techniques (e.g. using transaction hashes and/or IBLT), once implemented, will certainly keep the growth rate of new-block message size much lower than the bandwidth growth rate.
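As a rough illustration of the saving from relaying blocks by transaction hash (back-of-envelope arithmetic only; the average transaction size below is an assumption, and this is not the actual IBLT or relay-network protocol):

Code:
#include <cstdio>

int main()
{
    // Relay a block as its 80-byte header plus txids, assuming peers already
    // hold the full transactions in their mempools. Illustrative figures only.
    const double avgTxSizeBytes = 500.0;    // assumed average transaction size
    const double txidSizeBytes  = 32.0;     // size of a transaction hash
    const double blockSizeBytes = 1e6;      // a full 1MB block

    double txCount     = blockSizeBytes / avgTxSizeBytes;   // ~2000 transactions
    double compactSize = 80 + txCount * txidSizeBytes;      // header + txids

    std::printf("full block: %.0f KB, hash-only relay: %.0f KB (%.1fx smaller)\n",
                blockSizeBytes / 1000, compactSize / 1000, blockSizeBytes / compactSize);
    return 0;
}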

At the moment the 1MB limit in CheckBlock() is agnostic as to how blocks are received:

Code:
    // Size limits
    if (block.vtx.empty() || block.vtx.size() > MAX_BLOCK_SIZE || ::GetSerializeSize(block, SER_NETWORK, PROTOCOL_VERSION) > MAX_BLOCK_SIZE)
        return state.DoS(100, error("CheckBlock() : size limits failed"),
                         REJECT_INVALID, "bad-blk-length");

Consider that bandwidth is the binding constraint, and disk space perhaps 10x less of one. This implies that a 1MB maximum for transmitted blocks should be reflected as a 10MB maximum for old blocks read from / written to disk (especially once node bootstrapping is enhanced by headers-first sync and an available UTXO set).

Put another way, a newly mined 2MB block might be transmitted across the network in compressed form, perhaps as little as 200KB, yet it would be rejected, even though it falls within the resource constraints we already accept.
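A rough sketch of what separating the two limits might look like (the constant and function names here are hypothetical, not existing Bitcoin Core code):

Code:
// Hypothetical constants: 1MB cap on what is relayed over the wire,
// 10MB cap on the canonical serialized block stored and validated on disk.
static const unsigned int MAX_WIRE_BLOCK_SIZE       =  1 * 1000 * 1000;
static const unsigned int MAX_SERIALIZED_BLOCK_SIZE = 10 * 1000 * 1000;

// Sketch: check the compressed on-wire message against the bandwidth-driven
// limit, and the full serialized block against the larger storage-driven limit.
bool CheckBlockSizes(unsigned int nWireMessageSize, unsigned int nSerializedSize)
{
    if (nWireMessageSize > MAX_WIRE_BLOCK_SIZE)
        return false;   // too expensive to relay
    if (nSerializedSize > MAX_SERIALIZED_BLOCK_SIZE)
        return false;   // too expensive to store and validate
    // e.g. a 2MB block relayed as a 200KB compressed message passes both checks.
    return true;
}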