That if your home computer had to download 2.25 TB/day to be a full node, you might not be able to do it, which is a centralising factor.
Adam
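As a sanity check on the 2.25 TB/day figure, a quick back-of-envelope calculation works out. The average transaction size of ~260 bytes is my own assumption (not stated in the thread); the 100k TPS rate comes from later in the discussion.

```python
# Back-of-envelope check of the 2.25 TB/day download figure.
# TX_SIZE_BYTES is an assumed average transaction size, not a thread value.
TX_SIZE_BYTES = 260          # assumed average transaction size
TPS = 100_000                # transactions per second (from the thread)
SECONDS_PER_DAY = 86_400

bytes_per_day = TX_SIZE_BYTES * TPS * SECONDS_PER_DAY
tb_per_day = bytes_per_day / 1e12   # decimal terabytes

print(f"{tb_per_day:.2f} TB/day")
```

With those assumptions this comes to roughly 2.25 TB/day, matching the figure quoted above.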
Let's put the problem in perspective: by the time I'm faced with managing 2.2 TB/day of Bitcoin transactions, the value stored in the 10 BTC I've held onto all this time will take my hobby to a whole new level.
Building that data storage system would be a labour of love; many will do it just to make sure their 10 BTC are secure.
Exactly.
And the demand to support 100k TPS will not arrive anytime soon, maybe two or three decades from now. I remember buying 30 MB hard disk drives around 1990 and thinking they held a lot of data. 2 TB is common today, and by the 2030s high-density optical data storage could be standard:
In this paper, we present a review of the recent advancements in nanophotonics-enabled optical storage techniques. Particularly, we offer our perspective of using them as optical storage arrays for next-generation exabyte data centers.
http://www.nature.com/lsa/journal/v3/n5/full/lsa201458a.html

The problem is not so much the storage. Bandwidth is where you'll run into a bottleneck.
Even so, IBLT can cut bandwidth requirements by at least two orders of magnitude. Only full-node bootstrapping and re-sync will remain bandwidth-intensive.
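The "two orders of magnitude" claim can be illustrated with rough numbers. With IBLT-style relay, a node that has already seen the transactions only needs a small set-reconciliation sketch rather than the full block. The block and sketch sizes below are illustrative assumptions of mine, not measurements from the thread.

```python
# Rough illustration of the IBLT bandwidth-savings claim.
# Both figures are assumed for illustration, not measured values.
BLOCK_SIZE_BYTES = 1_000_000     # assumed 1 MB block, fully retransmitted today
IBLT_SKETCH_BYTES = 10_000       # assumed ~10 KB IBLT difference sketch

reduction = BLOCK_SIZE_BYTES / IBLT_SKETCH_BYTES
print(f"~{reduction:.0f}x less block-relay bandwidth")
```

Under these assumptions the block-relay step shrinks by a factor of ~100, i.e. two orders of magnitude, though initial transaction relay, bootstrapping, and re-sync still move the full data.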