Problem being, once the blocksize is increased, there's effectively no going back. So I do understand Core's conservative blocksize philosophy.
That's only a problem if we're talking about a static blockweight, though. And for the life of me, I can't figure out why, as a community, we still think in such limited, primitive terms. If you make it
adjustable by algorithm, then it can either increase or decrease depending on set parameters. [...]
Thanks for the link! Seeing how there have been multiple proposals regarding dynamic increases of the maximum blocksize, I've actually been wondering why there hasn't been any hardfork yet trying to implement one of them -- unless there is one and I've simply lost track.
Nonetheless I'd probably just go for straightforward static periodic block size increases every year or halving period instead of an algorithm based on network traffic. Assuming the latter can be gamed in one way or another (I still need to let your proposal sink in a bit, but I'm not yet fully convinced that it can't be exploited to force a maximum block size increase anyway), this would at least skip the extra step of trying to anticipate transaction workload.
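A schedule-based alternative like the one described above could be as simple as this sketch: the limit grows by a fixed factor every halving period, ignoring network traffic entirely (which is precisely why it can't be gamed by stuffing blocks). The base limit and growth factor below are illustrative assumptions; only the 210,000-block halving interval is taken from Bitcoin itself.

```python
HALVING_INTERVAL = 210_000   # blocks per halving period (as in Bitcoin)
BASE_LIMIT = 1_000_000       # starting limit in bytes (illustrative)
GROWTH_FACTOR = 2            # multiplier per halving period (illustrative)

def limit_at_height(height):
    """Block size limit at a given block height under a fixed schedule."""
    periods = height // HALVING_INTERVAL
    return BASE_LIMIT * GROWTH_FACTOR ** periods
```

The trade-off is the one noted above: this skips any attempt to anticipate transaction workload, so the schedule can just as easily overshoot or undershoot actual demand.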