Ever since BIP106 was first proposed, I've been a fan of the idea of dynamic scaling, although shortly after it appeared, I decided the original concept was far too unrestrictive and could result in dangerously large size increases if it was abused. So over time, I've been looking at different tweaks and adjustments, partly to curtail any excessive increases, but also to incorporate SegWit, limit the potential for gaming the system and even prevent dramatic swings in fee pressure. So far, that's where I've got to. I'm still hoping some coders will take an interest and get it to the next level where it might actually be practical to implement.
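To give a feel for the general shape I have in mind, here's a rough Python sketch. Every name, threshold and number in it is a placeholder assumption of mine, not anything taken from BIP106 or an actual proposal:

```python
# A minimal sketch of demand-driven cap adjustment, loosely in the spirit
# of BIP106-style dynamic scaling. All parameters are illustrative
# assumptions, not values from any real proposal.

INITIAL_CAP = 4_000_000      # base blockweight cap (SegWit-style weight units)
MAX_STEP = 0.01              # cap can move at most 1% per adjustment period
ADJUSTMENT_PERIOD = 2016     # blocks per period, mirroring difficulty retargets
RAISE_THRESHOLD = 0.90       # raise if median usage exceeds 90% of the cap
LOWER_THRESHOLD = 0.50       # lower again if median usage falls below 50%

def median(values):
    s = sorted(values)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

def next_cap(current_cap, recent_block_weights):
    """Return the cap for the next period from the last period's block weights."""
    usage = median(recent_block_weights) / current_cap
    if usage > RAISE_THRESHOLD:
        current_cap = int(current_cap * (1 + MAX_STEP))   # small, bounded raise
    elif usage < LOWER_THRESHOLD:
        current_cap = int(current_cap * (1 - MAX_STEP))   # undo when demand fades
    return max(current_cap, INITIAL_CAP)                  # never below the floor
```

Using the median rather than the mean is deliberate: a handful of miners stuffing a few oversized blocks into a period can't drag the adjustment upwards on their own, which is one way to limit gaming.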
My opinion is that an open cap is too unrestrictive, but a solid cap is too restrictive. That's why I think we need a way for the network to raise the cap on its own, within a set of limitations, so that it can't bloat.
EDIT: I took a look at your thread, which looks similar to the first part of what I suggested here, but what's to stop the block size from increasing too fast for the network to handle? I think dynamic scaling needs to focus on both the needs of the network and the capability of the network, with two distinct scaling mechanisms used together: the first adjusts the cap according to the needs of the network, and the second adjusts it according to the capabilities of the network (see the sketch below). Together, they allow flexibility, but there would have to be some incentive for node operators to be willing to expand to handle the increased traffic.
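To make that two-layer idea concrete, a minimal sketch, assuming a hypothetical capability signal from nodes (how to actually measure that is the open problem, as the next post points out):

```python
# A sketch of the two-layer idea: one signal tracks what the network needs
# (demand: how full recent blocks are), the other what it can handle
# (capability: whatever limit node operators signal they'll tolerate).
# The effective cap is always the lower of the two, so demand alone can
# never push the size past what nodes are capable of. Both inputs are
# placeholder assumptions.

def effective_cap(demand_driven_cap: int, capability_ceiling: int) -> int:
    """The demand side proposes, the capability side disposes."""
    return min(demand_driven_cap, capability_ceiling)

# e.g. demand wants 4.4M weight units, but nodes only signal 4.2M:
print(effective_cap(4_400_000, 4_200_000))  # -> 4200000
```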
And therein lies the rub: there's currently no way to forecast or determine the limits when it comes to the capability of the network, other than asking nodes directly to set their own individual preferred caps. Somehow I doubt many people on these boards would consider implementing ideas borrowed from Bitcoin Unlimited.
Joking aside, combining algorithmic blockweight adjustments with allowing each node to set an upper limit on the size it's willing to accept would work, were it not for the fact that it could easily force nodes off the network in a hardfork if they set their own personal limit lower than that of the majority of other participants. So even if people were willing to take ideas from BU, it still has some pretty serious shortcomings. If anyone can come up with a solution to that conundrum, I'm eager to hear it. Until then, it's a sticking point, which is why all I could really do is make any increases as small as possible and allow the network to undo the increase if and when the demand isn't there anymore.
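Here's a toy illustration of that fork risk, with made-up numbers:

```python
# A toy example of why per-node limits (the BU-style idea above) can split
# the network: nodes that set a lower personal cap reject blocks the
# majority accepts, and end up stranded on their own chain. All values
# here are invented for illustration.

def node_accepts(block_weight: int, local_limit: int) -> bool:
    """Each node applies only its own configured limit."""
    return block_weight <= local_limit

block_weight = 4_200_000                                  # block the majority mines
print(node_accepts(block_weight, local_limit=8_000_000))  # True: stays on the chain
print(node_accepts(block_weight, local_limit=4_000_000))  # False: hardforks off
```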
In essence, any attempt to place a hard upper limit inevitably results in a hardfork at some point in the future, unless you're an absolute genius and manage to find a workaround or hack to implement it as an opt-in soft fork.
Please stop wasting your time (and everyone else's): learn how the Lightning concept works first, then start talking again.
Please don't respond to threads with a condescending attitude, especially with inaccurate information, lest you be seen as trolling. That last line was unnecessary and detracts from the conversation. Even if I was wrong (and I wasn't), there's nothing wrong with being wrong and being corrected.
If you dare to discuss on-chain scaling, Carlton will jump down your throat for even the slightest perceived transgression. Not that I'm excusing the behaviour; it's just that I can't imagine it changing any time soon. It just seems to be the way of things. Don't let it put you off.