Here's an interesting thought experiment...
The only real problem with that logic is that we need to store the transactions in a block after they happen, and blocks have an upper limit on size. At 3,000 tx per second we would have 450,000 transactions in a block. I'm getting about 107MB per block with some rough math. If the network requires about 4x the block size in bandwidth, we could possibly need about 100MB a minute.
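The rough math above can be reproduced with a quick back-of-envelope script. Note the 2.5-minute block interval and the ~250-byte average transaction size are assumptions I'm plugging in to reach the ~107MB figure; neither number is stated above, and the per-minute bandwidth result is very sensitive to them.

```python
# Back-of-envelope block size math. Assumed inputs (not from the post):
# a 2.5-minute block interval and ~250 bytes per average transaction.

TX_PER_SECOND = 3000
BLOCK_INTERVAL_SECONDS = 150      # 2.5-minute blocks (assumption)
AVG_TX_BYTES = 250                # average tx size (assumption)
BANDWIDTH_MULTIPLIER = 4          # network needs ~4x the block size

txs_per_block = TX_PER_SECOND * BLOCK_INTERVAL_SECONDS
block_bytes = txs_per_block * AVG_TX_BYTES
block_mib = block_bytes / 2**20

# Spread one block's 4x relay cost over the block interval in minutes
bandwidth_mib_per_min = block_mib * BANDWIDTH_MULTIPLIER / 2.5

print(f"transactions per block: {txs_per_block:,}")   # 450,000
print(f"block size: {block_mib:.0f} MiB")             # ~107 MiB
print(f"relay bandwidth: ~{bandwidth_mib_per_min:.0f} MiB/min")
```

With these assumed inputs the block size lands right at ~107 MiB; the bandwidth estimate comes out somewhat higher than 100MB a minute, which is why it's only a ballpark.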
To get 100MB a minute consistently we could have a 3rd tier that lives on a high-performance network (maybe a specific AWS zone for high-performance computing). If you want to use our high-performance network, you would switch through a 2-way peg to this second chain, which has pretty much unlimited speed. We could actually have a bunch of these, all on separate networks throughout the world, so that if one is having problems, there's always redundancy.
Hmm...
The masternode network + quorums is quite powerful for solving these types of problems.
Question: would that 100MB a minute be the same size on the blockchain? How would we store such a behemoth? I mean seriously, no matter how you shake it, blockchain size is going to have to be dealt with someday, and somewhere, I think most would agree, the whole thing must be stored in its entirety, decentralized. This will be a problem for everyone. What do you think needs to be done?
I can see chopping it up and storing sections of it redundantly amongst groups of nodes, perhaps in a compressed format, digitally signed and archived (again redundantly), with the "working" chain being a trimmed version. What other options might there be? Thanks for your opinion!
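The "chop it up and store sections redundantly" idea could be sketched as a deterministic assignment of chain segments to node groups, so every node can compute which groups hold which segment without any coordinator. Everything here is hypothetical illustration, not an actual Dash design: the function name, the group count, and the replication factor are all made up.

```python
# Hypothetical sketch: deterministically map each chain segment to
# several distinct node groups via repeated hashing, giving built-in
# redundancy. Parameters (num_groups, replicas) are illustrative.

import hashlib

def groups_for_segment(segment_id: int, num_groups: int, replicas: int) -> list[int]:
    """Pick `replicas` distinct groups for a segment by hashing
    segment_id with an incrementing counter until enough distinct
    groups have been selected."""
    chosen = []
    counter = 0
    while len(chosen) < replicas:
        digest = hashlib.sha256(f"{segment_id}:{counter}".encode()).digest()
        group = int.from_bytes(digest[:4], "big") % num_groups
        if group not in chosen:
            chosen.append(group)
        counter += 1
    return chosen

# e.g. segment 42 of the chain, 100 node groups, 3-way redundancy
print(groups_for_segment(42, num_groups=100, replicas=3))
```

Because the mapping is pure hashing, any node can independently answer "who stores segment N?", and losing one group never loses a segment as long as the other replicas survive.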