Re: Bitcoin Block Size Conflict Ends With Latest Update
by yayayo on 23/06/2015, 00:19:56 UTC
Board: Bitcoin Discussion
History indicates otherwise. Nielsen's Law of Internet Bandwidth (http://www.nngroup.com/articles/law-of-bandwidth/) predicts 50% growth per year compounded, so a doubling every two years is actually below that.

Edit: 1.5^20 > 3325 vs 2^10 = 1024
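As a quick sanity check of that comparison, here is a minimal sketch in Python; the 20-year horizon is only an illustrative assumption, not something from either post:

Code:
# Compare 20 years of Nielsen-style 50%/year growth against
# a doubling every two years over the same (assumed) horizon.
years = 20

nielsen_factor = 1.5 ** years         # 50% compounded annually
doubling_factor = 2 ** (years // 2)   # one doubling per two years

print(f"50%/year over {years} years: x{nielsen_factor:,.0f}")   # ~x3,325
print(f"Doubling every 2 years:      x{doubling_factor:,}")     # x1,024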

The link you provided cites advertised plans boasting hypothetical peak bandwidth, not real-life bandwidth averages.

Additionally, this doesn't address all the concerns:

1) Propagation latency caused by larger blocks incentivizes the centralization of mining into large pools
2) Not everyone worldwide lives in a location where bandwidth is growing at the same rate
3) Advertised bandwidth rates are not the same as real-world bandwidth rates
4) ISPs often put soft caps on the total data used by an account and throttle the user's speed to a crawl once the cap is exceeded. More of them are no longer advertising unlimited monthly bandwidth and are instead setting explicit transfer limits and hard caps with overage charges.
5) Full nodes at home have to compete with the bandwidth demands of HD video streaming, which most users rely on and which is getting increasingly demanding. Most people don't want to spend most of their bandwidth supporting a full node and give up streaming Netflix or torrenting.
6) Supporting nodes over Tor is a concern

Let's look at the historical record of real-world bandwidth averages -

http://explorer.netindex.com/maps?country=United%20States

1/2008      5.86 Mbps
12/2008     7.05 Mbps
12/2009     9.42 Mbps
12/2010    10.03 Mbps
12/2011    12.36 Mbps
12/2012    15.40 Mbps
12/2013    20.62 Mbps
12/2014    31.94 Mbps

Thus you can see that even if I ignore the many parts of the world where internet access isn't scaling as fast and focus on the "first world", bandwidth speeds aren't scaling up as quickly as you suggest.
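To put a number on that, here is a minimal sketch (Python) that estimates the compound annual growth rate implied by the netindex figures above; treating the span as roughly 7 years is my own approximation:

Code:
# Estimate the compound annual growth rate (CAGR) implied by the
# netindex US download averages quoted above.
start_mbps = 5.86   # 1/2008
end_mbps = 31.94    # 12/2014
years = 7           # approximate span between the two data points

cagr = (end_mbps / start_mbps) ** (1 / years) - 1
print(f"Measured CAGR: {cagr:.1%}")        # roughly 27% per year
print("Nielsen's law:  50.0% per year")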

You have to take into account that these measurements are download speeds. However, the real bottleneck is upload speed, which is far lower.

Also, these measurements are short-term and cover direct ISP connectivity only (which is what Nielsen's observations are based on). These measurements and growth rates are in no way applicable to a decentralized multi-node network. In addition, most ISPs have explicit or implicit data-transfer limits - they will throttle your connection down if you exceed a certain transfer volume.

So I fully agree with your assessment that Hearn's and Gavin's plan is horrible - in fact, it's even more horrible than you've shown.
There's no way I will ever support such a fork.

ya.ya.yo!