Post
Topic
Board Development & Technical Discussion
Re: How a floating blocksize limit inevitably leads towards centralization
by
markm
on 23/02/2013, 06:31:39 UTC
Fine then, let's look at how much it will cost to do specific upgrade scenarios, since all costs ultimately end up coming out of the pockets of users. (Right? Users ultimately pay for everything?)

One scenario is, we upgrade absolutely every full node that currently runs 24/7.

Already we are driving out all the people who like to freeload on the resources of decentralised networks without giving back. They fire up a node, download the movie they want or do the bitcoin payment they want - in general, get something they want from the system - then they shut down the app so that their machine doesn't participate in providing the service.

So for a totally ballpark, hand-waving, back-of-a-napkin guess, let's guess maybe ten times the block size would mean ten times the current costs of hardware and bandwidth. "We" (the users) provide upgrades to all the current 24/7 full nodes, increase the max block size by ten, and we're done.
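The napkin math above can be sketched out. Every figure here is a made-up placeholder (node count, per-node cost), and the model assumes cost scales linearly with block size, which is exactly the assumption questioned below:

```python
# Toy back-of-napkin model of aggregate upgrade cost.
# All figures are illustrative assumptions, not measurements.

NODES_24_7 = 10_000          # assumed count of always-on full nodes
COST_PER_NODE_USD = 50       # assumed monthly hardware + bandwidth cost per node
SCALE = 10                   # proposed block-size multiplier

current_total = NODES_24_7 * COST_PER_NODE_USD
# Key (questionable) assumption: total cost grows linearly with block size.
scaled_total = current_total * SCALE

print(f"current: ${current_total:,}/month")
print(f"at {SCALE}x blocks: ${scaled_total:,}/month")
```

Change any of the three constants and the conclusion scales accordingly; the point is only that "we're done" has a price tag someone pays.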

Of course the beancounters will point out "we" can save a lot of cost by building one global datacentre in Iceland, increasing the max block size to, oh heck, why quibble over how many gigabytes, let's just do this right and say at least a terabyte. Current nodes are just the fossils of a dead era; "we" do this once, do it right, and we're done. For redundancy maybe "we" actually build seven centres, one per continent, instead of just one wherever electricity and cooling is cheapest.


Two details:

One, is the fixed cost really linear in the max block size? Or is it really more like exponential?

Two, experiment with various s/we/someoneelse/ substitutions? (Google? The government? Big Brother? Joe Monopolist? The Russian Mob? Etc.)

(I thought I had two details, but come time to type the second I couldn't remember what I had had in mind, so made up the above detail Two as a spacefiller.)
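Detail One can be made concrete with a toy comparison of cost-growth models. The functions and coefficients below are invented for illustration; a superlinear term is plausible if, say, propagation or validation overhead grows faster than raw block size, but nothing here is a measurement:

```python
import math

# Three hypothetical cost-growth models as a function of the
# block-size multiplier. Base costs and rates are made up.

def linear(scale, base=1.0):
    return base * scale

def quadratic(scale, base=1.0):
    return base * scale ** 2

def exponential(scale, base=1.0, rate=0.5):
    return base * math.exp(rate * (scale - 1))

for s in (1, 10, 100):
    print(s, linear(s), quadratic(s), round(exponential(s), 1))
```

At 10x blocks the three models already diverge by orders of magnitude, which is why the answer to detail One matters so much for who can afford to run a node.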

-MarkM-