Post
Topic
Board Speculation
Re: Wall Observer BTC/USD - Bitcoin price movement tracking & discussion
by
AZwarel
on 17/01/2016, 22:35:54 UTC
Why would there be only 5-10, or only 500, nodes? There is no factual data to support that claim at all.

Everyone can run a p2p client that deals with 10 kB/sec, needs 100 MB of storage, one small CPU, and 256 MB of RAM. As you go up and up in hardware requirements, cost goes up and "volunteers" go down, even if the userbase goes up. Once you hit datacenter-level requirements, the number of "volunteers" starts dropping significantly, because costs run into five digits, then six digits, and eventually you'll be paying millions.

Everyone can, and yet, they don't. Also, this isn't the USSR: not everyone can have the same capacity, yet 100 million people already have 1 GB/sec connections with multi-TB storage today, so we have a lot of room to improve. And while it might cost six-digit money to run a datacenter TODAY, in ten years it could easily be 3-4 digits, as has happened multiple times before.

You cannot selectively extrapolate present->future data pairs: if p2p client requirements increase 100-fold in the future, you also have to project future capacity increases and cost decreases as well!
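The point above can be sketched as a back-of-the-envelope calculation. This is only an illustration of the argument, not a forecast: the growth factor, cost-decline rate, and starting cost below are all hypothetical numbers chosen for the example.

```python
# Hypothetical illustration: if node requirements grow, but hardware cost
# per unit of capacity also falls over the same period, the future cost of
# running a node depends on BOTH trends, not just one.
# All numbers here are assumptions made up for this sketch.

def projected_node_cost(current_cost, requirement_growth,
                        annual_cost_decline, years):
    """Future cost = today's cost, scaled up by requirement growth and
    scaled down by the compounded per-unit hardware cost decline."""
    hardware_cost_factor = (1 - annual_cost_decline) ** years
    return current_cost * requirement_growth * hardware_cost_factor

# Assume: running a node costs $100 today, requirements grow 100x over
# ten years, and hardware cost per unit of capacity falls 30%/year
# (a rate in the ballpark of historical storage/bandwidth trends).
cost = projected_node_cost(100, 100, 0.30, 10)
print(f"${cost:,.0f}")  # ~$282: 100x the requirements, similar cost
```

Extrapolating only the 100x requirement growth while holding today's hardware prices fixed would instead predict $10,000, which is the selective extrapolation being objected to.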

Of course, I agree that scaling should optimally follow the speed of capacity increase, i.e. the technical constraints on economies of scale. Where I disagree is this: the technical side is also increasing, and will keep increasing, which makes it possible to scale up, for example the blocksize, right now.