Board: Development & Technical Discussion
Topic: Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
Post by NewLiberty on 13/10/2014, 08:58:44 UTC
Quote
(e.g. by using transaction hashes and/or IBLT), once implemented, will certainly keep the new block message size growth rate much lower than the bandwidth growth rate.
Keep in mind these techniques don't reduce the amount of data that needs to be sent (except, at most, by a factor of two). They reduce the amount of latency-critical data. Keeping up with the blockchain still requires transferring and verifying all the data.
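To put rough numbers on that factor-of-two bound (the figures below are assumptions for illustration, not measurements): a transaction relayed the legacy way is transferred twice, once at relay time and once inside the block, while hash/IBLT-style relay transfers it once plus a ~32-byte identifier at block time.

Code:
# Back-of-the-envelope illustration; average tx size and tx count are assumed.
TX_SIZE = 500          # assumed average transaction size, bytes
TXS_PER_BLOCK = 2000   # assumed transactions in a ~1 MB block
TXID_SIZE = 32         # bytes per transaction hash

# Legacy relay: each tx is sent at relay time and again inside the block.
legacy_total = TXS_PER_BLOCK * TX_SIZE * 2
legacy_block_msg = TXS_PER_BLOCK * TX_SIZE             # latency-critical part

# Hash/IBLT-style relay: txs are sent once; the block message carries only ids.
compact_total = TXS_PER_BLOCK * (TX_SIZE + TXID_SIZE)
compact_block_msg = TXS_PER_BLOCK * TXID_SIZE          # latency-critical part

print(legacy_total / compact_total)          # ~1.9x: total data saved at most ~2x
print(legacy_block_msg / compact_block_msg)  # ~15.6x: latency-critical data shrinks far more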

Quote
Not a big deal?  Well, except that we can expect the power of nodes to follow some sort of curve ("exponential" in the vernacular), such that most nodes are barely above the threshold to be viable.  Meaning that such an event would cause the majority of nodes to shut down, likely permanently.
Right. There is a decentralization trade-off at the margin.  But this isn't scaleless: there is _some_ level, even some level of growth, which presents little to no hazard even way down the margin.   As a soft stewardship goal (not a system rule, since it can't be one), the commitment should be that the system is run so that it fits into an acceptable portion of common residential broadband, so that the system does not become dependent on centralized entities. As some have pointed out, being decentralized is Bitcoin's major (and perhaps only) strong competitive advantage compared to traditional currencies and payment systems. How best to meet that goal is debatable in the specifics.
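For a rough sense of scale, here is a sketch with assumed numbers only, counting just block download and ignoring transaction relay, upload to peers, and initial sync, which dominate in practice:

Code:
# Sustained bandwidth needed just to download blocks, versus an assumed
# residential link. All values are illustrative assumptions.
BLOCK_SIZE_MB = 1.0       # assumed block size
BLOCK_INTERVAL_S = 600    # one block roughly every ten minutes
LINK_MBPS = 10.0          # assumed residential downstream, megabits/s

sustained_mbps = (BLOCK_SIZE_MB * 8) / BLOCK_INTERVAL_S
print(sustained_mbps)                    # ~0.013 Mbit/s averaged over time
print(100 * sustained_mbps / LINK_MBPS)  # ~0.13% of the assumed link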

At the moment there are a bunch of silly low-hanging fruit that make running a node more costly than it needs to be; we're even at the point where some people developing on Bitcoin Core have told me they've stopped running a node at home. It's hard to reason about the wisdom of these things while the system is still being held back by some warts we've long known how to correct and are in the process of correcting.

It doesn't make sense to guess at this.  Any guess is bound to be wrong.
If, after picking the low-hanging fruit, there is still an issue here (and there may be), it ought not be resolved by a guess when there is data within the block chain that would be useful for determining the max block size.
In the same way that difficulty adjustment is sensitive to data within the block chain, so too could this be.
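The difficulty rule already works that way: it reads timestamps that are in the chain and retargets against a fixed goal. A simplified sketch of that idea follows; the clamping matches the real rule in spirit, but fixed-point details and the interval off-by-one are omitted.

Code:
# Simplified sketch of difficulty retargeting, which derives its adjustment
# entirely from data already recorded in the chain (block header timestamps).
TARGET_TIMESPAN = 14 * 24 * 60 * 60   # two weeks, in seconds

def next_target(old_target, first_block_time, last_block_time):
    # Actual time the ~2016-block retarget window took, read from the chain.
    actual = last_block_time - first_block_time
    # The real rule clamps the adjustment to a factor of four either way.
    actual = max(TARGET_TIMESPAN // 4, min(actual, TARGET_TIMESPAN * 4))
    # Easier target (larger number) if blocks came too slowly, harder if too fast.
    return old_target * actual // TARGET_TIMESPAN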

I don't know what the right answer is any more than Gavin does, but making an estimate would not be the best way to solve this in any case.

.....
One example of a better way would be to use a sliding window of blocks, 100+ deep, and base the max allowed size on some percentage over the average, while dropping anomalous outliers from that calculation.  Using a method that is sensitive to the reality as it exists in the unpredictable future gives some assurance that we won't just be changing this whenever circumstances change.
Do it right, do it once.
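A minimal sketch of that kind of rule follows. The window depth, outlier trim, and headroom percentage are placeholder assumptions, not a concrete proposal.

Code:
def adaptive_max_block_size(recent_sizes,
                            window=144,          # assumed window depth, "100+" blocks
                            trim_fraction=0.10,  # drop the largest/smallest 10% as outliers
                            headroom=1.5):       # allow 50% over the trimmed average
    """Derive the next max block size from block sizes already in the chain (bytes)."""
    sizes = sorted(recent_sizes[-window:])
    k = int(len(sizes) * trim_fraction)
    trimmed = sizes[k:len(sizes) - k] if k > 0 else sizes
    average = sum(trimmed) / len(trimmed)
    return int(average * headroom)

Like difficulty retargeting, a rule of this shape keys the limit to observed data rather than to a one-time guess about future networks.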

There isn't a way to predict what networks will look like in the future, other than to use the data of the future to do just that.