Which is fine. I get that. But 'I really don't quite understand yet how this is going to work, exactly' is kind of hard to square with 'The solution to Scalability'. If one does not understand all the considerations, how is one able to meaningfully advocate it as any kind of solution?
It is a kind of "counterpunch" to those BIP101 and XT propaganda posters. This seems like a very good short-term solution that will buy the developers enough time to work out other solutions. Beyond increasing capacity, the proposal has further benefits that make it even better. This is what we need: better infrastructure, not just changing the block size to arbitrary numbers and hoping they turn out to be the right call.
-snip-
Further, does this open a new attack vector? If nodes stop validating transactions before forwarding them, there is nothing to stop them from forwarding invalid transactions. What if an attacker injected many invalid transactions into the network? Being invalid, they would be essentially free to create in virtually unbounded quantities. If nodes no longer validate before relaying, the result would be 'invalid transaction storms' that could consume many times the bandwidth of the relatively small volume of legitimate traffic. If this is indeed a valid concern, it would work exactly contrary to the stated goal of improving scalability.
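To put a rough number on the bandwidth concern, here is a toy back-of-the-envelope sketch. All figures and names are illustrative assumptions on my part, not numbers from the proposal or actual Bitcoin Core behaviour:

# Toy sketch: rough bandwidth cost of an invalid transaction under
# validate-before-relay vs. forward-before-validate. Numbers are made up.

TX_SIZE_BYTES = 300      # assumed typical transaction size
NUM_NODES = 6000         # assumed count of reachable relaying nodes
PEERS_PER_NODE = 8       # assumed connections each node relays to

def cost_validate_first():
    # The first honest node that checks the tx drops it, so the attacker
    # only pays for the connections they feed directly.
    return TX_SIZE_BYTES * PEERS_PER_NODE

def cost_forward_first():
    # If every node relays before validating, the invalid tx crosses
    # roughly every link in the network once before being dropped.
    return TX_SIZE_BYTES * NUM_NODES * PEERS_PER_NODE

if __name__ == "__main__":
    print("validate-first bytes:", cost_validate_first())
    print("forward-first bytes: ", cost_forward_first())
    print("amplification factor:", cost_forward_first() // cost_validate_first())

Under those assumptions the amplification is on the order of the number of nodes, which is why relaying without validation would seem to defeat the purpose. I'd be glad to be told where this reasoning goes wrong.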
You should not be asking that here; ask it somewhere the developers are likely to see it and answer. Apparently it has been tested for six months, so I'm fairly sure they are aware of these potential attack vectors (or at least some of them). Besides, they aren't going to rush this. It should be available on testnet this month, IIRC.