If this comes to fruition and the software is adaptable and robust enough, we could easily contribute hundreds of TB of storage and probably 100 Gbps of burst bandwidth.
But it needs to be robust and scalable.
We use tons of disks: each system has 2 disks, and they are sold on a potential-maximum-disk-usage basis, so even the smallest servers generally have 30%+ of their disk space unused, and large servers can run 80% empty. We even have nodes with 30 TB+ of disk space and just 4 TB used. For low-activity data, that's perfect storage, as long as it's robust.
I.e. if users need more disk space available, we need to be able to autonomously, effortlessly and quickly go from contributing, say, 2 TB to the network down to 1 TB on a single node.
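Purely as a sketch of what that resize would involve (none of these client calls exist anywhere; the names are hypothetical and just label the steps a node would have to take):

```python
# Hypothetical "shrink allocation" flow for a storage node. Every API name here
# (client.list_stored_chunks, request_reallocation, etc.) is made up for
# illustration; the point is only the ordering: re-home data first, free space last.

def shrink_allocation(client, new_limit_bytes):
    """Reduce this node's contributed space down to new_limit_bytes."""
    stored = client.list_stored_chunks()           # chunks currently held locally
    used = sum(c.size for c in stored)

    # Evict least-recently-requested chunks first until we fit under the new cap.
    for chunk in sorted(stored, key=lambda c: c.last_requested):
        if used <= new_limit_bytes:
            break
        # Ask the network to re-home this chunk before deleting it locally,
        # so redundancy never drops below the target replica/parity count.
        client.request_reallocation(chunk.id)
        client.wait_until_replicated(chunk.id)
        client.delete_local(chunk.id)
        used -= chunk.size

    client.set_allocation_limit(new_limit_bytes)   # advertise the new, smaller quota
```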
If that can happen, especially if it's built in, and the software runs stably on Linux, we could put it on 50 servers just to test it out.
If it actually makes a positive financial return, we could eventually put up petabytes of storage dedicated solely to the network.
NOW, as a data specialist: you need at least replication in any case, and in a network like this probably 3+ replicas. Better yet, since CPU and RAM are going to be abundant and no data on this network is going to need tons of IOPS or real-time read/write access, built-in erasure coding would be much better. Dedup is a must.
For example, take a 1 GB file and split it into 30+5: 30 data pieces and 5 redundancy pieces.
The piece size could be hard-coded: say 128 MB pieces for files larger than 1 GB, 32 MB pieces for 500 MB to 1 GB files, and for smaller files a piece size chosen so there are always 16 pieces, rounded to a multiple of 4 KiB; really tiny files (<1 MB perhaps?) would just be replicated.
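A minimal sketch of that piece-size policy, assuming exactly the thresholds above; the function name and return format are just illustrative:

```python
# Sketch of the proposed piece-size policy. The thresholds (1 GB, 500 MB, 1 MB)
# and the "always 16 pieces, rounded to 4 KiB" rule follow the post above;
# everything else (names, return values) is illustrative.

MB = 1024 * 1024
GB = 1024 * MB
ALIGN = 4096  # 4 KiB alignment

def piece_size(file_size: int):
    """Return (piece_size_bytes, strategy) for a file of the given size."""
    if file_size < 1 * MB:
        return file_size, "replicate"        # tiny files: plain replication
    if file_size > 1 * GB:
        return 128 * MB, "erasure-code"      # large files: fixed 128 MB pieces
    if file_size >= 500 * MB:
        return 32 * MB, "erasure-code"       # 500 MB - 1 GB: fixed 32 MB pieces
    # Everything else: aim for exactly 16 data pieces, rounded up to 4 KiB.
    size = -(-file_size // 16)               # ceil(file_size / 16)
    size = -(-size // ALIGN) * ALIGN         # round up to a 4 KiB multiple
    return size, "erasure-code"
```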
That will save a TON of storage and increase reliability a lot.
With 32 data pieces + 8 redundancy pieces, 8 nodes can disappear simultaneously without losing the data. In a network like this that's perhaps still a bit risky, but once the network matures it may not be risky at all, since most nodes will eventually be 24/7 servers.
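A back-of-the-envelope way to put a number on that risk, assuming each of the 40 piece-holding nodes fails independently with probability p before a repair can happen (the p values are made up for illustration):

```python
# Rough durability estimate for a 32+8 erasure-coded object: the object is lost
# only if MORE than 8 of the 40 piece-holding nodes fail before repair.
# Assumes independent node failures, which is a simplification.

from math import comb

def loss_probability(n=40, parity=8, p=0.05):
    """P(more than `parity` of `n` nodes fail), binomial model."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(parity + 1, n + 1))

for p in (0.01, 0.05, 0.10, 0.20):
    print(f"p = {p:.2f}: P(data loss) ~ {loss_probability(p=p):.2e}")
```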

Financially, to dedicate resources just for this, I need to see 3+/month per TiB of storage and 2+ per TiB of uploaded data traffic. Market pricing for storage right now on dedis is about 7.5/TiB/month.
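As a rough illustration, using one of the nodes mentioned above (30 TB total, ~26 TB spare) and those minimum rates; the monthly egress figure is purely an assumption:

```python
# Rough monthly revenue for one example node at the post's minimum rates.
# The 26 TB spare figure comes from the 30 TB / 4 TB used node mentioned above;
# the egress volume is a made-up assumption just to show the calculation.

TIB = 1024**4
TB = 1000**4

stored_tib = (26 * TB) / TIB          # ~23.6 TiB if the spare space fills up

storage_rate = 3.0                    # per TiB stored per month (post's minimum)
egress_rate = 2.0                     # per TiB uploaded to clients (post's minimum)
egress_tib_per_month = 5.0            # assumption: 5 TiB served per month

monthly = stored_tib * storage_rate + egress_tib_per_month * egress_rate
print(f"~{stored_tib:.1f} TiB stored -> roughly {monthly:.0f} per month at minimum rates")
```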
It's best to let the market decide the pricing.
The market deciding pricing is the most important bit: you cannot compete with Dropbox-like pricing, because then people will just use Dropbox. But something that is nearly free as in beer -> people will flock in and use it insanely much, putting upward pressure on prices.
Apps are important too; for backing up servers, for example, this would be golden!

I would love to use something like this to back up our vast amounts of data if the price is sufficiently low, but in my case it's either/or: if the price is extremely low, I will use the system myself; if storage prices are high, I will be providing storage. We cannot justify doubling the cost per user account, except for a limited portfolio of services.
Further, to gain market adoption, things need to be simple. Ultra simple. So simple even a trained monkey can use it.
But I like this. Looks awesome. Can't wait to see when testing begins!
