I do have to add that, while I think it would still be extremely hard to gather 90-95% consensus on either idea, both would attract far broader support, far more easily, than either SegWit or BU.
I don't understand that statement. Are you talking about DooMAD's idea (modified BIP100+BIP106) or the compromise proposed by "ecafyelims", or both?
Both.
I ask because I think DooMAD's "10%-blocksize-change-voting proposal" sounds interesting, and if there is support from staff/respected community members/devs, then it would be worth discussing in a separate thread to elaborate a "final BIP".
The idea is worth discussing on its own, regardless of whether others support it. Do note that "support by staff" (if you're referring to Bitcointalk staff) is useless. Excluding achow101 and potentially dabs, the rest have very limited or merely average knowledge. Did you take a look at luke-jr's recent HF proposal? achow101 modified it by removing the initial size reduction. Read:
https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-February/013544.html

if blocks grow to say 8mb we just keep tx sigops BELOW 16,000 (we dont increase tx sigop limits when block limits rise).. thus no problem.
That's not how this works.
https://bitco.in/forum/threads/buip033-passed-parallel-validation.1545/

For those unwilling to click through:
BUIP033: Parallel Validation
Proposer: Peter Tschipper
Submitted on: 10/22/2016
Summary:
Essentially, Parallel Validation is a simple concept. Rather than validating each block within the main processing thread, we instead create a separate thread to do the block validation. If more than one block arrives to be processed, then we create yet another thread. There are currently up to 4 parallel block processing threads available, making a big-block DDOS attack impossible. Furthermore, if an attacker were somehow able to jam all 4 processing threads and another block arrived, then the processing for the largest block would be interrupted, allowing the smaller block to proceed, unless the larger block or blocks have the most proof of work. So only the smallest blocks and those with the most proof of work will be allowed to finish in such a case.
If there are multiple blocks processing at the same time, when one of the blocks wins the race to complete, the other processing threads are interrupted and the winner updates the UTXO set and advances the chain tip. The other blocks that were interrupted will still be stored on disk in the event of a re-org, however.
Which effectively.. solves nothing.
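To make the mechanism in the quoted BUIP concrete, here is a minimal sketch (hypothetical names, not Bitcoin Unlimited's actual code) of the race it describes: each candidate block is validated on its own thread, the first to finish claims the tip, and the losing threads notice a shared flag and abort.

```python
import threading

class ParallelValidator:
    """Hypothetical illustration of BUIP033-style parallel validation."""

    def __init__(self, max_threads=4):   # BUIP033 allows up to 4 threads
        self.max_threads = max_threads
        self.winner = None
        self.lock = threading.Lock()
        self.done = threading.Event()    # set once some block wins the race

    def _validate(self, block):
        # Simulated per-transaction work loop; a real validator would be
        # checking signatures, scripts, and the UTXO set here.
        for _ in range(block["work_units"]):
            if self.done.is_set():       # another block already won:
                return                   # this thread is "interrupted"
        with self.lock:
            if self.winner is None:      # first finisher advances the tip
                self.winner = block["hash"]
                self.done.set()

    def race(self, blocks):
        threads = [threading.Thread(target=self._validate, args=(b,))
                   for b in blocks[: self.max_threads]]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return self.winner

# A small block races a much larger one; the small block will
# typically finish validating first and win the race.
blocks = [{"hash": "big",   "work_units": 5_000_000},
          {"hash": "small", "work_units": 1_000}]
print(ParallelValidator().race(blocks))
```

The key design point the proposal relies on is the cooperative interruption: a validating thread only notices it lost at its next check of the shared flag, so an oversized block burns CPU until then but can no longer block smaller blocks from advancing the chain tip.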