From the Satoshi Nakamoto whitepaper:
"If we suppose blocks are generated every 10 minutes, 80 bytes * 6 * 24 * 365 = 4.2MB per year. With computer systems typically selling with 2GB of RAM as of 2008, and Moore's Law predicting current growth of 1.2GB per year, storage should not be a problem even if the block headers must be kept in memory."
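The arithmetic in the quote checks out, and it's easy to extend to see why header storage is a non-issue. A quick sketch (the 2GB figure is just the 2008 RAM size from the quote):

```python
HEADER_BYTES = 80        # size of one Bitcoin block header
BLOCKS_PER_HOUR = 6      # one block roughly every 10 minutes

blocks_per_year = BLOCKS_PER_HOUR * 24 * 365   # 52,560 blocks
bytes_per_year = HEADER_BYTES * blocks_per_year

print(bytes_per_year / 1e6)        # ~4.2 MB of headers per year
print(2e9 / bytes_per_year)        # ~475 years of headers fit in 2GB of RAM
```

Even at 2008 RAM sizes, centuries of block headers fit in memory with room to spare.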
At Best Buy: "Excuse me sir, I need to get new RAM. I need, hmm ... 240GB of RAM, please."
And you could get an EC2 instance with 240GB of RAM (and 48TB of HDD space) as of Nov 2012[1], if needed.
You can configure machines like that today with 256GB of RAM (an m4.16xlarge, for example[2]), less expensively than 4 years ago. You can even get 976GB or 1952GB of RAM (with about double that in disk storage) today with an x1.16xlarge or x1.32xlarge EC2 instance. So while 240GB seems like a lot when one is used to a laptop with 16GB, in reality it isn't out of the ballpark for many needs.
Kind of like 64K seemed like a lot in 1981. Or 1MB seemed like a lot in 1985. etc.
1. https://techcrunch.com/2012/11/29/amazon-announces-2-new-ec2-instance-types-cluster-high-memory-with-240gb-ram-and-high-storage-with-48tb-hdd-space/
2. https://aws.amazon.com/ec2/instance-types/