Network assurance contracts are far from a sure thing. They're basically an attempt to solve the tragedy of the commons, and society's success rate there is pitiful, even with strong central authorities. Assuming they will work is a big risk.
Sure, there is some risk, but Kickstarter is showing that the general concept can indeed fund public goods.
I don't see any reason to think CPU power will be the issue. It's network capacity and disk space that are the problem.
1.2 megabytes a second is only ~10 megabits per second - pretty sure my parents' house has more bandwidth than that. Google is wiring Kansas City with gigabit fibre right now, and we're not running it as a charity. So network capacity doesn't worry me a whole lot. There's plenty of places in the world that can keep up with a poxy 10 megabits.
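To spell out the conversion (back-of-the-envelope, ignoring TCP/IP and protocol overhead):

```python
# Convert 1.2 megabytes/sec of block data into megabits/sec.
# Rough figure only - real traffic carries protocol overhead on top.
mb_per_sec = 1.2
mbit_per_sec = mb_per_sec * 8
print(mbit_per_sec)  # 9.6, i.e. ~10 megabits per second
```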
3TB per month of transfer is, again, not a big deal. For a whopping $75 per month, bitvps.com will rent you a machine with 5TB of bandwidth quota per month and 100mbit connectivity.
Lots of people can afford this. But by the time Bitcoin gets to that level of traffic, if it ever does, it might cost more like $75 a year.
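As a sanity check on those numbers (a rough sketch assuming a 30-day month and perfectly steady traffic):

```python
# Does ~1.2 MB/s of sustained relaying fit inside a 5TB/month quota?
bytes_per_sec = 1.2e6
seconds_per_month = 30 * 24 * 3600
tb_per_month = bytes_per_sec * seconds_per_month / 1e12
print(round(tb_per_month, 1))  # ~3.1 TB/month, comfortably under the 5TB quota
```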
You also have to ask: what fraction of that 3TiB/month ends up as unspent txouts? Ultimately it's the UTXO set that is the hard limit on the storage requirements for full validating nodes. Even if just 0.1% of that volume stays unspent, you're looking at roughly 3GiB/month of growth in your requirement for fast-random-access memory.
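To make the arithmetic explicit (the 0.1% figure is purely illustrative, not a measurement):

```python
# If some fraction of monthly transaction volume ends up as unspent outputs,
# that fraction accumulates in the UTXO set every month.
monthly_volume_gib = 3 * 1024      # 3 TiB/month expressed in GiB
unspent_fraction = 0.001           # assume 0.1% of the volume stays unspent
utxo_growth_gib = monthly_volume_gib * unspent_fraction
print(round(utxo_growth_gib, 1))   # ~3.1 GiB/month added to the UTXO set
```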
How did you arrive at 3GB/month? The entire UTXO set currently fits in a few hundred megs of RAM.
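You can check that figure against a running node yourself. This is just a sketch - it assumes a bitcoind new enough to expose the gettxoutsetinfo RPC, and the exact output fields vary between versions:

```python
# Ask a local bitcoind for UTXO set statistics via bitcoin-cli.
# Assumes bitcoind is running and supports gettxoutsetinfo; field names differ by version.
import json, subprocess

info = json.loads(subprocess.check_output(["bitcoin-cli", "gettxoutsetinfo"]))
print(info["txouts"])  # number of unspent outputs
print(info.get("bytes_serialized") or info.get("disk_size"))  # serialized size in bytes
```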
All the time you're spending waiting for transactions to be retrieved from memory is time you aren't hashing.
Why? Hashing happens in parallel to checking transactions and recalculating the merkle root.
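Here's a toy sketch of what I mean - purely conceptual, since in a real miner the hashing runs on dedicated hardware rather than a Python thread. One thread keeps grinding nonces on the current block template while another validates transactions, rebuilds the merkle root and hands over fresh work:

```python
# Conceptual sketch: hashing and validation run concurrently.
# The hasher keeps running; it just swaps to a new template whenever validation produces one.
import hashlib, threading, queue, time

work_queue = queue.Queue()   # validated block templates ready to be hashed
stop = threading.Event()

def hasher():
    template, nonce = b"initial-template", 0
    while not stop.is_set():
        try:
            template, nonce = work_queue.get_nowait(), 0   # pick up fresher work if any
        except queue.Empty:
            pass                                           # otherwise keep hashing old work
        hashlib.sha256(template + nonce.to_bytes(8, "little")).digest()
        nonce += 1

def validator():
    for height in range(3):
        time.sleep(0.1)   # stands in for fetching txs and checking signatures
        merkle_root = hashlib.sha256(b"txs-at-height-%d" % height).digest()
        work_queue.put(b"header-with-" + merkle_root)      # hand fresh work to the hasher
    stop.set()

threads = [threading.Thread(target=hasher), threading.Thread(target=validator)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```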
Your example has nothing to do with Bitcoin. Even in the early days it would have been obvious to anyone who understood comp-sci that serving a static website is O(1) work per request, so there was no reason to think you couldn't scale a website to as much load as you wanted.
Nobody in 1993 could build a website that the entire world used all the time (like Google or Wikipedia). The technology did not exist.
Following your line of thinking, there should have been some way to ensure only the elite got to use the web. Otherwise how would it work? As it got too popular all the best websites would get overloaded and fall over. Disaster.
Or what about the global routing table? Every backbone router needs a complete copy of it, and BGP effectively broadcasts every route change to every participant. How can the internet backbone possibly scale? Perhaps we should only allow people to access the internet at universities, to avoid uncontrollable growth of the routing table.
I just don't see scalability as ever being a problem, assuming effort is put into better software. Satoshi didn't think it would be a problem either; it was one of the first conversations we ever had. These conversations have been going around and around for years, and I'm not convinced we're gaining any new insight from them at this point. Satoshi's vision was for the block limit to be removed. So let's do it.