In order to validate transactions quickly, the UTXO set needs to be in memory. So what happens when the UTXO set is 32 GB? 64 GB? 200 GB? Now, if those are "valid" outputs likely to be spent in future transactions, well, that is just the cost of being a full node. But when 50%, 70%, 95%+ of the outputs are just unspendable garbage
...they'll get pushed to swap space along with all the other memory pages that haven't been accessed for a while? We expect caching algorithms and virtual memory to still be a thing in the future, right?
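To make the reply's point concrete, here is a minimal sketch of the kind of caching the second post is assuming: keep only recently touched outputs in RAM and fall back to a slower on-disk store for everything else, so cold "garbage" outputs cost disk space rather than memory. The names (UtxoCache, DiskStore, OutPoint, Coin) are illustrative stand-ins, not Bitcoin Core's actual classes, and the LRU policy is just one example of a caching algorithm.

```cpp
// Illustrative LRU-style UTXO cache with a disk fallback (not Bitcoin Core code).
#include <cstdint>
#include <list>
#include <optional>
#include <string>
#include <unordered_map>
#include <utility>

struct OutPoint {                 // identifies an output: txid + output index
    std::string txid;
    uint32_t index;
    bool operator==(const OutPoint& o) const {
        return txid == o.txid && index == o.index;
    }
};
struct OutPointHash {
    size_t operator()(const OutPoint& o) const {
        return std::hash<std::string>{}(o.txid) ^ o.index;
    }
};
struct Coin { uint64_t amount = 0; std::string script; };

// Stand-in for the on-disk UTXO database (a real node would use something
// like LevelDB here); Load() is stubbed out for the sketch.
struct DiskStore {
    std::optional<Coin> Load(const OutPoint&) const { return std::nullopt; }
};

class UtxoCache {
    size_t capacity_;                                    // max entries kept in RAM
    DiskStore& disk_;
    std::list<OutPoint> lru_;                            // front = most recently used
    std::unordered_map<OutPoint,
        std::pair<Coin, std::list<OutPoint>::iterator>,
        OutPointHash> map_;
public:
    UtxoCache(size_t capacity, DiskStore& disk)
        : capacity_(capacity), disk_(disk) {}

    // Look up an output: hot entries come from RAM, cold ones from disk.
    std::optional<Coin> Get(const OutPoint& op) {
        auto it = map_.find(op);
        if (it != map_.end()) {
            lru_.splice(lru_.begin(), lru_, it->second.second);  // mark as recent
            return it->second.first;
        }
        auto coin = disk_.Load(op);                      // cold path: hit the disk
        if (coin) Insert(op, *coin);
        return coin;
    }

    // Add (or refresh) an entry, evicting the least recently used one if full.
    void Insert(const OutPoint& op, const Coin& coin) {
        auto it = map_.find(op);
        if (it != map_.end()) lru_.erase(it->second.second);
        lru_.push_front(op);
        map_[op] = {coin, lru_.begin()};
        if (map_.size() > capacity_) {
            map_.erase(lru_.back());                     // drop coldest output
            lru_.pop_back();
        }
    }
};
```

The design choice mirrors the argument above: outputs that are never spent simply never get pulled into the hot set, so they sit on disk (or in swapped-out pages) and only consume cheap storage, not the RAM needed for fast validation of live outputs.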