The level of hostile emotional appeal and logical fallacy in this thread is astounding.
Why is it so difficult for a group of intelligent people to discuss ideas on their own merit?
There is no tax on being respectful.
With respect, you are unaware of the level of contempt that the BIP101 pushers show for anyone who tries to disabuse them of their idea. These people aren't attempting a reasonable discussion in the first instance, so it is only appropriate to match their zeal with opprobrium.
Of course BIP101 is a technical proposal to be discussed on its merits; it has been dissected, disliked, and ruled out already. But this very vocal minority doesn't seem to want to go away gracefully.
I may have missed the contempt, but I assure you I've read as much as I can find on this subject.
It's not overt at all; this is a sociopathic form of contempt. I'll use your next reply to explain...
To date I'm not fully convinced by any argument. However, I've seen very little evidence that leads me to think keeping the 1MB limit is a good idea. It appears to add a new variable (a hard limit) where we have not previously needed one. I see one argument for keeping it, which is to prevent 'centralization', although nowhere is this term quantified. Further, keeping it introduces a new centralization vector that previously didn't exist (namely increased fees).
If you are going to argue that keeping something reduces centralization while it simultaneously introduces something else that increases centralization, the whole thesis breaks down; it is self-contradicting.
Now that keeping 1MB is ruled out, one conclusion follows: we must have a hard fork.
Great stuff. What you've stated there is the premise of the debate, namely:
- 1MB is too small going forward, regardless of the overlay networks that might relieve that
- Changing that limit is part of the consensus rules, so a hard fork is needed (sketched below)
That's where everyone is starting the debate from.
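To make that second premise concrete: the cap is enforced during block validation, so raising it produces blocks that un-upgraded nodes reject. Here's a minimal sketch in Python (illustrative only; the real check lives in Bitcoin Core's C++ validation code, and the function names here are mine):

```python
# Minimal sketch of why raising the limit is a hard fork. The constant
# matches Bitcoin's consensus rule; the function names are illustrative.

MAX_BLOCK_SIZE = 1_000_000  # bytes; the current consensus limit

def old_node_accepts(block_bytes: bytes) -> bool:
    """Validation as performed by un-upgraded nodes."""
    return len(block_bytes) <= MAX_BLOCK_SIZE

def new_node_accepts(block_bytes: bytes, new_limit: int) -> bool:
    """Validation after a limit-raising fork."""
    return len(block_bytes) <= new_limit

# A 1.5MB block is valid to upgraded nodes but invalid to old ones,
# so the chain splits unless everyone upgrades: a hard fork.
big_block = bytes(1_500_000)
assert new_node_accepts(big_block, 8_000_000)
assert not old_node_accepts(big_block)
```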
Except the people arguing for BIP101. They have a highly consistent habit of labelling everyone who doesn't like it as "1MB'ers". It's a dishonest tactic: it serves to distort their opponents' arguments in the most extreme way possible, and they then use that unreality as the basis to argue against imaginary positions.
Therefore, if you put me on the spot today, I'd say removing the blocksize limit altogether (in a gradual and predictable manner) is the best solution.
Reading the paper by Peter R will help with the leap of faith required here to trust in the free market. In my humble opinion, that's what this whole debate boils down to: attempting to control a vast and complex system of variables, or simply letting the supply and demand of the free market do what it does best.
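For what "gradual and predictable" looks like in practice, BIP101 itself publishes a fixed schedule: 8MB at activation, doubling every two years for twenty years, with linear interpolation between doubling points. Here's a rough Python sketch of that schedule (constants taken from the BIP text as I read it; treat this as illustrative, not the reference implementation):

```python
# Rough sketch of BIP101's published growth schedule. Constants are
# from the BIP text as I read it; this is not the reference code.

INITIAL_LIMIT = 8_000_000        # bytes at activation
ACTIVATION_TIME = 1_452_470_400  # 2016-01-11 00:00:00 UTC
DOUBLING_INTERVAL = 63_072_000   # two years, in seconds
MAX_DOUBLINGS = 10               # schedule ends after twenty years

def max_block_size(median_time: int) -> int:
    """Block size limit at a given (median) block timestamp."""
    if median_time < ACTIVATION_TIME:
        return 1_000_000  # pre-fork rule
    elapsed = median_time - ACTIVATION_TIME
    doublings = min(elapsed // DOUBLING_INTERVAL, MAX_DOUBLINGS)
    limit = INITIAL_LIMIT * (2 ** doublings)
    if doublings < MAX_DOUBLINGS:
        # linear interpolation toward the next doubling point
        remainder = elapsed % DOUBLING_INTERVAL
        limit += limit * remainder // DOUBLING_INTERVAL
    return limit

# One year after activation: halfway to the first doubling, so 12MB.
print(max_block_size(ACTIVATION_TIME + 31_536_000))  # 12000000
```

The point is the predictability: any node can compute the limit for any future timestamp, so nobody has to trust anyone's judgment about when to raise it.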
This idea sounds tempting, but it isn't a good idea.
The supply and demand of block sizes on the network doesn't work well with arbitrary limits, as you've identified yourself. But it also does not work if you remove the limit altogether; there exists an incentive to game the system. Creating algorithmic rules on the network to govern re-sizing would essentially fulfil what you're advocating: no prescribed limit. Scheduled caps (such as the scheme that Peter R advocates in the paper you like) will not create that free market; the caps, and the incentive to abuse them, will still be there.
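To illustrate the kind of algorithmic re-sizing rule meant here, consider a purely hypothetical example (not any specific BIP): cap the next block at a multiple of the median size of recent blocks. The window, multiplier, and floor below are arbitrary choices for illustration, and the comments show the gaming incentive described above:

```python
from statistics import median

# Hypothetical dynamic cap: a multiple of the median of recent block
# sizes. WINDOW, MULTIPLIER, and FLOOR are arbitrary illustrative values.
WINDOW = 144         # roughly one day of blocks
MULTIPLIER = 2       # headroom above recent demand
FLOOR = 1_000_000    # never shrink below the current 1MB rule

def dynamic_limit(recent_block_sizes: list[int]) -> int:
    """Size cap for the next block, derived from recent block sizes."""
    window = recent_block_sizes[-WINDOW:]
    return max(FLOOR, int(median(window)) * MULTIPLIER)

# The incentive problem: miners can pad their own blocks with junk
# transactions to drag the median, and hence the cap, upward over time.
organic = [600_000] * WINDOW
stuffed = [1_000_000] * WINDOW   # miner-padded blocks at the old cap
print(dynamic_limit(organic))    # 1200000
print(dynamic_limit(stuffed))    # 2000000
```

Whatever the parameters, the cap is still there, and so is the incentive to push it around; a formula doesn't make that a free market.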