Board: Development & Technical Discussion
Re: Increasing the block size is a good idea; 50%/year is probably too aggressive
by trout on 15/10/2014, 19:39:13 UTC
Of course one can say: let's set it to 50% per year until bandwidth stops growing that fast, and then fork again. But that only postpones the problem. Trying to predict now exactly when that will happen, and to program for it now, seems futile.

Okey dokey.  My latest straw-man proposal is 40% per year growth for 20 years. That seems like a reasonable compromise based on current conditions and trends.

You seem to be looking hard for reasons not to grow the block size -- for example, yes, CPU clock speed growth has stopped. But the number of cores per chip continues to grow, so Moore's Law continues. (And the reference implementation already uses as many cores as you have to validate transactions.)

Actually, I'm not looking for reasons not to grow the block size: I suggested sub-exponential growth instead, for example quadratic (that was a serious suggestion).
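
To make the two schedules concrete, here is a minimal sketch in Python; the 1 MB starting cap and the particular quadratic formula are illustrative assumptions on my part, not part of either proposal.

Code:
# Hypothetical block-size-cap schedules, starting from an assumed 1 MB cap.
BASE_MB = 1.0

def exponential_cap(years, rate=0.40):
    # Compounding growth: cap * (1 + rate)^years.
    return BASE_MB * (1.0 + rate) ** years

def quadratic_cap(years):
    # A sub-exponential alternative: cap * (1 + years)^2.
    return BASE_MB * (1 + years) ** 2

for y in (5, 10, 20, 30):
    print(f"year {y:2}: exponential {exponential_cap(y):10.1f} MB,"
          f" quadratic {quadratic_cap(y):7.1f} MB")

By year 20 the exponential schedule sits near 837 MB against 441 MB for the quadratic one, and by year 30 the gap is roughly 25x; that divergence is what the next paragraph is about.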

About the 40% over 20 years: what if you overshoot by, say, 10 years, and as a result of 40% growth over those 10 extra years the max block size grows so much that it's effectively infinite? (1.4^10 ~ 30.) The point is that with an exponent it's too easy to overshoot. If you then want to solve the resulting problem with another fork, it may be much harder to reach a consensus, since the problem will be of a very different nature (too much centralization vs. too expensive transactions).
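
To spell out that overshoot arithmetic (same illustrative assumptions as the sketch above), the multiplier depends only on how many extra years the schedule keeps running:

Code:
# Overshoot factor: how much the cap keeps multiplying for every
# extra year the 40%/year schedule runs past the "right" horizon.
for extra_years in (5, 10, 15):
    factor = 1.4 ** extra_years
    print(f"{extra_years:2} extra years -> cap is {factor:6.1f}x too large")

Ten extra years give a factor of about 29. Under the same illustrative quadratic formula, overshooting a 20-year horizon by 10 years would only inflate the cap by (31/21)^2, roughly 2.2x, which is the asymmetry being argued here.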