Bitcoin Cash is not going to have a lot of full nodes if things go their way. In order to scale anywhere close to Visa on chain, as appears to be their goal, an enterprise would need access to equipment capable of storing at least 56 TB of data per year. Probably at least quadruple that, since you would want to properly index the data and keep proper backups. A casual user would need to run an SPV client.
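The per-year storage figure can be sanity-checked with quick arithmetic. The sketch below assumes 1 GB blocks arriving every 10 minutes on average; these parameters and the 4x overhead multiplier are illustrative assumptions, not figures from the posts above:

```python
# Rough annual storage estimate for a chain of large blocks.
# Assumed parameters: 1 GB blocks, one block every 10 minutes.
GB = 10**9  # bytes

block_size_bytes = 1 * GB
blocks_per_year = 6 * 24 * 365      # 6 blocks/hour * 24 h * 365 days
raw_chain_bytes = block_size_bytes * blocks_per_year

print(f"blocks per year:  {blocks_per_year}")                      # 52560
print(f"raw chain growth: {raw_chain_bytes / 1e12:.1f} TB/year")   # ~52.6 TB

# With indexing and backups (roughly the 4x overhead suggested above):
print(f"with 4x overhead: {4 * raw_chain_bytes / 1e12:.0f} TB/year")
```

This lands in the same ballpark as the 56 TB claim; the exact figure depends on the assumed average block size and interval.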
For the umpteenth time - Bitcoin Cash has no reason to support Visa-scale transaction volume until such time as demand for transactions reaches Visa scale (is that a tautology?). If we somehow hit that next year, I'll be more than happy to store 56 TB of data that year. Of course, even in the most wildly optimistic scenario, Visa-scale demand is still several years off.
Perhaps you need to discuss this with Craig Wright on Twitter then. It appears nChain is working hard to bring this capacity to reality in 2018, needed or not.
Capacity is not consumption.
After watching this video:
https://www.youtube.com/watch?v=5SJm2ep3X_M&feature=youtu.be
I have come to the conclusion that on-chain scaling is indeed feasible. My previous objection was that it takes pools too long to verify transactions, so 1 GB blocks would result in many empty blocks being mined.
I see no scenario where that would be a consequence. Can you explain? For that matter, how does one quantify 'too many empty blocks'? As long as all transactions are processed with reasonable alacrity, what does the block fullness matter?
Currently, there is a bottleneck: nodes need to verify the transactions included in the previous block before they can start including transactions in the new block they are working on. This is why empty blocks are frequently mined when a block arrives within about 30 seconds of the previous one. 1 GB blocks would compound this bottleneck 1000x. However, nChain is working on ways to relieve the bottleneck, which would address my objection.
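The 30-second window can be put in perspective. Block arrivals are well approximated by a Poisson process with a 600-second mean interval, so the fraction of blocks found within 30 seconds of their predecessor follows from the exponential distribution. A back-of-the-envelope sketch (the model choice is my assumption, not from the posts above):

```python
import math

MEAN_INTERVAL = 600.0   # average seconds between blocks
WINDOW = 30.0           # verification window mentioned above

# Inter-block times are approximately exponential, so the chance
# the next block arrives inside the verification window is:
p_fast_block = 1.0 - math.exp(-WINDOW / MEAN_INTERVAL)
print(f"{p_fast_block:.1%}")  # ~4.9% of blocks risk being mined empty
```

So under this model roughly one block in twenty is at risk of being mined empty while the previous block is still being verified.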
Again, I fail to see why this is a problem.
I actually had a significant discussion about a year ago with the author of the leading independent mining software about this very issue, though the motivating factor was the nonsense about nonlinear hash time for aberrant transactions. Yes, there is significant opportunity to improve the threading model within the leading Bitcoin clients.
However, I don't see how your reply addresses my question. As long as all transactions are processed with reasonable alacrity, what does the block fullness matter?
However, I still have misgivings about 1 GB blocks, since they would require a node to have a storage capacity of over 50 TB per year.
Other than to repeat that we will not consume that much storage until there is actual demand for it, I would point out that 50 TB of storage today costs less than 0.1 BTC (~$1,650 USD at standard retail).
At this time, I believe this would create a barrier to entry for small startups that require a full node to be run (new exchanges, information services, mining pools, etc.).
If a startup can't lose 0.1 BTC in the noise of its annual CapEx budget, it is likely to be a non-entity anyhow.
According to Craig Wright, the initial setup would cost $20,000. If I wanted to start a block explorer or a site like fork.lol, that's quite an outlay, especially since it is extremely difficult to monetize such an informational site enough to recoup the costs. I enjoy these information sites and don't deem them "non-entities."
I have heard Craig Wright speak of the $20K figure, but I have not heard him present it as a minimum viable investment, merely as an example. More importantly, $20K (I note that this buys a depreciating asset with a useful life exceeding one year) is a drop in the bucket for any real business. Crippling Bitcoin in order to save paying a $5/year subscription for access to fork.lol's data is the wrong decision.