Then why does anyone give a damn about blockchain bloat? It seems like that system would still work fine even when the blockchain is 50,000TB. I am sure we'd still have 10 full nodes even with a HUGE blockchain.
People who are concerned about "blockchain bloat" really don't understand the technical details of bitcoin. They try to run Bitcoin Core without pruning and then complain that they've run out of disk space. They complain that it won't be possible for enough people to run a "FULL" node.
Note that they are at least partly correct. As I mentioned, you'll always need some nodes to store the entire history. You initially called them "Special Archive Nodes". They are currently called "Full Nodes". The point is that those same people would complain about "bloat" under your solution too, since (in their opinion) not enough people would be able to run "Special Archive Nodes".
The bigger issue with larger block sizes isn't the total size of the blockchain, it's the size of the individual blocks themselves. Before a miner can reliably build a new block on top of a recently solved block in the chain, they need to verify that the recently solved block is valid. That means taking the time to download the entire block and verify both the block header and every transaction in it. Miners or pools with slower internet connections or slower processors are at a significant disadvantage compared with those on extremely high-speed connections with high-end processors. That disadvantage could give the better-equipped miners and pools a head start on the next block, letting them solve more blocks than the disadvantaged miners and pools. Eventually the loss of revenue could result in mining power consolidating into just the few miners or pools with access to the fastest internet connections and the newest processors.
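To put some rough numbers on it, here's a quick back-of-envelope sketch in Python. The connection speeds and verification throughput are made-up assumptions (the exact figures aren't the point); it just shows how the well-connected miner's head start grows with block size:

```python
# Back-of-envelope sketch: how long a miner sits idle after a new block is
# announced before it can safely build on top of it. The connection speeds
# and verification throughput below are illustrative assumptions only.

BLOCK_SIZES_MB = [1, 10, 100, 1000]              # candidate block sizes
CONNECTIONS_MBPS = {"slow": 10, "fast": 1000}    # assumed link speeds, megabits/s
VERIFY_MB_PER_SEC = 5                            # assumed verification throughput

def dead_time_seconds(block_mb, mbps):
    """Seconds spent downloading and verifying before mining on the new tip."""
    download = block_mb * 8 / mbps               # MB -> megabits, then divide by link speed
    verify = block_mb / VERIFY_MB_PER_SEC
    return download + verify

for size in BLOCK_SIZES_MB:
    slow = dead_time_seconds(size, CONNECTIONS_MBPS["slow"])
    fast = dead_time_seconds(size, CONNECTIONS_MBPS["fast"])
    print(f"{size:>5} MB block: slow miner {slow:8.1f} s, fast miner {fast:8.1f} s, "
          f"head start for the fast miner {slow - fast:8.1f} s")
```

Whatever numbers you plug in, the gap between the slow peer and the fast peer grows linearly with block size, and every second of that gap is mining time the slow peer spends working on a stale tip.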
It may also become very difficult for those on very slow connections to download and verify a single block before the next block is solved. In that case, even lightweight wallets and pruning nodes would fall farther and farther behind on synchronization and never be able to catch up.
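Along the same lines, here's a rough sketch of that "never catches up" condition (again Python, with assumed link speeds and verification rate): a node stays in sync only if it can download and verify each block faster than the network produces the next one, which happens roughly every 600 seconds on average.

```python
# Rough sketch of the "falls behind forever" condition: a node stays in sync
# only if it can download and verify one block faster than the network solves
# the next one (about 600 seconds on average). Link speeds and verification
# throughput are assumptions for illustration.

BLOCK_INTERVAL_SEC = 600        # average time between blocks
VERIFY_MB_PER_SEC = 5           # assumed verification throughput

def keeps_up(block_mb, download_mbps):
    """True if download plus verification fits inside one block interval."""
    seconds_per_block = block_mb * 8 / download_mbps + block_mb / VERIFY_MB_PER_SEC
    return seconds_per_block < BLOCK_INTERVAL_SEC

for block_mb in (1, 100, 1000):
    for mbps in (2, 50):
        status = "keeps up" if keeps_up(block_mb, mbps) else "falls behind forever"
        print(f"{block_mb:>5} MB blocks on a {mbps:>3} Mbps connection: {status}")
```

Once a node crosses that threshold, more time spent syncing doesn't help; new blocks arrive faster than it can process them, so the backlog only grows.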
The unanswered question is: how big is too big when it comes to blocks? Right now we are operating with a 1 MB limit. Is 2 MB too big? How about 10 MB? 100 MB? 1 GB? 1 TB? Who gets to choose, and how do we get EVERYONE to agree on that choice?