It's pretty curious that I'm talking about how we need to increase the Bitcoin block size while others are coming up with solutions to decrease the amount of storage needed to support Bitcoin.
Do you have a full archival node? If you do, I wonder how many times it has crashed and you had to reindex it. Copying 500 GB of data from one disk to another is not a big deal; the bigger problem is verification time, where you can spend a week reindexing the chain and rebuilding the database from scratch, even if all blocks are already downloaded.
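To put rough numbers on that, here is a quick back-of-the-envelope sketch; the throughput figures are just my assumptions for illustration, not benchmarks:

```python
# Why copying is cheap but reindexing is not: copying is limited by disk
# throughput, while reindexing is limited by script checks, UTXO updates,
# and database writes. All numbers below are assumptions, not measurements.

CHAIN_SIZE_GB = 500      # assumed size of the archival chain
COPY_MB_PER_S = 100      # assumed sequential disk-to-disk throughput
VERIFY_MB_PER_S = 1      # assumed effective full-verification throughput

copy_hours = CHAIN_SIZE_GB * 1024 / COPY_MB_PER_S / 3600
verify_days = CHAIN_SIZE_GB * 1024 / VERIFY_MB_PER_S / 86400

print(f"copying:    ~{copy_hours:.1f} hours")
print(f"reindexing: ~{verify_days:.1f} days")
```

With those assumptions you get roughly an hour and a half for the copy and close to a week for the reindex, which matches what I see in practice.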
I think we should step up; hardware isn't as expensive today as it was ten years ago.
I also wonder how many times you have tried to verify the chain, because CPU speed is not much better than it was ten years ago. You can see this in CPU-based mining: it is still quite slow, which means that even mining blocks at the lowest difficulty is not that much faster than it was ten years ago. And if you look at signet, you can see that its base difficulty is even lower than on mainnet!
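To put numbers on that as well, here is a minimal sketch; the targets are the mainnet difficulty-1 target and signet's powLimit as I recall them from Bitcoin Core's chainparams, and the CPU hashrate is just my assumption:

```python
# Expected work per block from a proof-of-work target: on average you need
# about 2^256 / (target + 1) hashes to find a block hash below the target.

CPU_HASHRATE = 2_000_000  # assumed: ~2 MH/s of double-SHA256 on one CPU

MAINNET_DIFF1 = 0x00000000FFFF0000000000000000000000000000000000000000000000000000
SIGNET_POWLIMIT = 0x00000377AE000000000000000000000000000000000000000000000000000000

def expected_hashes(target: int) -> float:
    return 2**256 / (target + 1)

for name, target in [("mainnet at difficulty 1", MAINNET_DIFF1),
                     ("signet at base difficulty", SIGNET_POWLIMIT)]:
    seconds = expected_hashes(target) / CPU_HASHRATE
    print(f"{name}: ~{seconds:.0f} s per block on this CPU")
```

That works out to around half an hour per minimum-difficulty block on a single CPU on mainnet, and only a couple of seconds on signet, which is presumably why signet's floor was set so much lower.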
And now I have made a copy of the whole chain on my 4 TB external disk, and I am spending the next hours getting it reindexed to the point where it was before the latest crash. Guess what: copying the whole chain was quite fast, but verification is still ongoing. It was started on Monday this week, and I hope it will finish before next Monday, without crashing. But I restart the client regularly and refresh my copy, so that I don't have to start reindexing from 2009 again. Guess what: reindexing is much slower than refreshing my backup.
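In case anyone wants to do something similar, "refreshing my copy" is roughly the following; the paths are just examples for a default Linux datadir and my external disk, and you should stop the client first so no file is being written mid-copy:

```python
# Refresh a backup of the block files instead of re-copying everything:
# only copy blk*.dat / rev*.dat files that are new or changed since last time.
# Stop bitcoind before running this, so no file changes mid-copy.
import shutil
from pathlib import Path

SRC = Path.home() / ".bitcoin" / "blocks"   # assumed default datadir layout
DST = Path("/mnt/external/bitcoin/blocks")  # assumed mount of the 4 TB disk

DST.mkdir(parents=True, exist_ok=True)
for src_file in sorted(SRC.glob("*.dat")):
    dst_file = DST / src_file.name
    if (not dst_file.exists()
            or dst_file.stat().st_size != src_file.stat().st_size
            or dst_file.stat().st_mtime < src_file.stat().st_mtime):
        shutil.copy2(src_file, dst_file)    # copy2 preserves timestamps
        print(f"refreshed {src_file.name}")
```

Since only the newest block files change between runs, a refresh takes minutes instead of hours.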