Board Bitcoin Discussion
Re: Estranged Core Developer Gavin Andresen Finally Makes Sensible 2MB BIP Proposal!
by
madjules007
on 17/02/2016, 00:18:50 UTC
Does not compute:

You seem to be suggesting that miners, merchants, and exchanges are the only entities (or people...) running nodes.
My claim is that the network would operate just fine without any but merchants, miners, and exchanges running a node. Independents add little to no value to the system. Further, as you acknowledge, there is no economic incentive for an independent to run a node.

Quote
Would you agree that the entire economy depending on a single node for validation endangers security and fungibility?

Probably. Which makes it A Good Thing that anyone who is concerned about such centralization is free to run a full node.

So, centralization endangers security and fungibility, but "the network would operate just fine" if validators were centralized into smaller and smaller groups that omitted the vast majority of the userbase? At what point does it become "too centralized?" Hash power is already highly centralized among a handful of pools; merchants almost entirely transact through two central processors (Bitpay and Coinbase); there are a handful of prominent exchanges and a couple handfuls more of dodgy, scammy ones. The network would run just fine if the entire userbase trusted these few entities to uphold the integrity of all transactions?

If the entire userbase stopped running nodes (save for this few dozen entities), are you suggesting that it would not be easier for say, a miner or a government, to mount a Sybil attack on the network?

Quote
How about 100 nodes? 1000? Where is the limit, and what is your evidence that typical users are safe from, say, Sybil attacks?

Classic increases the block size in an honest manner. Core increases the block size as well, through chicanery. A fully validating node still needs signatures, so its actual max block size is anywhere from 1MB to ~4MB, depending upon how much multisig is in that block.

Honesty is not the issue here; network security is the issue. Sure, a fully validating node still needs signatures. And in approaching the increased cost (upload bandwidth) of increased throughput, we can either externalize that cost to all nodes or distribute it to those who are using the capacity (and who can pay for it). Again, if you are suggesting this is a trust issue, can you outline a theoretical attack that exploits the signature chain?
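To make the quoted "1MB to ~4MB" range concrete, here is a minimal sketch of segwit's block weight rule (BIP 141), under which witness data (signatures) is discounted relative to base transaction data. The byte figures below are illustrative, not real blocks.

```python
# Segwit's block weight rule (BIP 141):
#   weight = 4 * base_size + witness_size, capped at 4,000,000 weight units.
# A block with no witness data maxes out at ~1 MB of base data; as the
# share of witness data (e.g. multisig) grows, total size approaches ~4 MB.

MAX_WEIGHT = 4_000_000

def effective_block_size(base_bytes: int, witness_bytes: int) -> int:
    """Total serialized size of a block that fits under the weight limit."""
    weight = 4 * base_bytes + witness_bytes
    assert weight <= MAX_WEIGHT, "block exceeds the weight limit"
    return base_bytes + witness_bytes

# No witness data: capped at 1,000,000 bytes of base data.
print(effective_block_size(1_000_000, 0))       # 1000000

# Multisig-heavy block: 800,000 base bytes use 3,200,000 weight units,
# leaving room for 800,000 witness bytes -> 1.6 MB total.
print(effective_block_size(800_000, 800_000))   # 1600000
```

This is why the effective size depends on transaction mix: the same weight budget buys more total bytes the more of them are witness data.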

Because Core requires as much data transfer and storage as Classic, or more, in proportion to how much multisig is used.

A similar increase in throughput without externalizing the cost to all nodes and thereby causing the perpetual drop in node count to continue.

Quote
I don't think anyone here is really arguing about disk space. But could you provide some evidence that "an insignificant proportion of current nodes will stop node-ing" if block size doubles? No data I've seen appears to support that, but maybe you could provide some.

Merely a rationally reasoned conclusion. Can you support an assertion to the contrary?

So... you cite no evidence. Then you cite "rationality" as the basis for your claims, without explaining the rationale. That's called an opinion, nothing more.

For starters, there is a clear negative correlation between node health and block size. As average block size increases, average node count falls. This is a recurrent trend over past years.

Logically, since bandwidth is the only major infrastructural bottleneck for nodes, one could surmise that if bandwidth requirements were low enough, non-broadband users -- mobile, dialup, ham radio, etc. -- could operate a node, and further, that some would run nodes (after all, some users evidently still run nodes despite the availability of SPV). Conversely, as bandwidth requirements increase against a static bandwidth cap (commonly in place for cable customers), the bandwidth a user can allocate toward running a node -- or toward any other bandwidth-heavy activity -- shrinks, increasing the likelihood that the node will simply be shut down for lack of resources. When only ~25% of connections in a highly industrialized country (the US) are fiber, it's prudent to assume that most people are limited at best to cable internet (frequently bandwidth-capped), and at worst to non-broadband connections that prohibit running a node at all.
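As a rough illustration of the cap pressure described above, here is a back-of-the-envelope sketch of the monthly upload bandwidth a node spends on block relay alone. The relay multiplier and the 300 GB cap are assumptions for illustration, not measured figures.

```python
# Back-of-the-envelope: monthly upload bandwidth spent relaying blocks,
# compared against a typical cable data cap. The relay multiplier (how
# many peers each block is uploaded to) and the cap are assumed figures.

BLOCKS_PER_MONTH = 6 * 24 * 30   # ~one block every 10 minutes
RELAY_MULTIPLIER = 8             # assumption: each block uploaded to ~8 peers
CAP_GB = 300                     # illustrative cable data cap

def monthly_upload_gb(block_mb: float) -> float:
    """Approximate GB of upload per month spent on block relay."""
    return block_mb * BLOCKS_PER_MONTH * RELAY_MULTIPLIER / 1024

for size in (1, 2, 4, 8):
    used = monthly_upload_gb(size)
    print(f"{size} MB blocks: ~{used:.0f} GB/month upload "
          f"({100 * used / CAP_GB:.0f}% of a {CAP_GB} GB cap)")
```

Under these assumptions, doubling the block size doubles relay upload, and at 8 MB blocks relay alone would consume most of the cap -- before counting serving the chain to new nodes or any of the household's other traffic.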

Given the persistent negative correlation between node health and growth in block size, and given that upload bandwidth is a measurable pressure on nodes that are known to have bandwidth limits (speed in the case of DSL and satellite, total bandwidth consumed in the case of cable), we have good reason to believe there is a causal relationship at work between node health and block size growth.
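One way to make the claimed correlation testable: compute a correlation coefficient between average block size and reachable node count over time. The two series below are placeholder numbers for illustration only, not real network data.

```python
# Sketch of how to test the claimed negative correlation between average
# block size and reachable node count. The series are PLACEHOLDER values
# for illustration -- substitute real historical data to run the test.

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

avg_block_kb = [120, 230, 350, 480, 600, 720]         # placeholder
node_count   = [10500, 9800, 9100, 8300, 7400, 6500]  # placeholder

print(f"r = {pearson_r(avg_block_kb, node_count):.3f}")  # strongly negative
```

Correlation alone doesn't prove causation, of course; the bandwidth-cap mechanism above is what makes the causal story plausible.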

Anecdotally, many on this forum have echoed the sentiment -- as do I -- that bandwidth is the only pressure that has ever caused us to shut down our nodes. I was recently lucky enough to upgrade to a fiber connection, but as recently as a few months ago, I refused to run a node full-time because I was capped by Time Warner Cable. Further, the operation of VPS nodes (whose numbers we cannot realistically estimate) is problematic in theory, since those nodes are not physically controlled by their operators, which calls into question how much they really add to decentralization in attack scenarios.

Quote
Intelligent miner =/= honest miner. Non-mining nodes reflect the interests of non-mining users, serving as a check on the power of miners. Non-mining nodes, by not trusting mining nodes and by enforcing the protocol, are integral to the integrity of the p2p system.

No, they are not. It is trivial for any miner to implement their own forwarding node, connecting explicitly to other miners which share their philosophy, completely bypassing any set of independent nodes.

If miners could trivially bypass any nodes, that would suggest that a Sybil attack would be trivial to mount, correct? After all, they could simply bypass all honest nodes. Miners have carried out attacks in the past, so why not Sybil attacks? If it's so easy to bypass the entire decentralized node system, why aren't miners attacking the userbase for profit on that basis? If miners can trivially control most nodes, they can censor and double-spend until the cows come home. Yet this doesn't happen. Why?

That miner you mention is also competing against all other miners to have his blocks validated by most nodes, so the presence of honest nodes disincentivizes him from attempting any attack on that basis unless he can profitably mount a Sybil attack. "Bypassing any set of independent nodes" is meaningless if most nodes on the network are still enforcing the same rules. Either the miner is honest and bypassing any nodes is a moot point, or the miner is dishonest and bypassing any nodes causes his blocks to be rejected (e.g. as with double-spends) if he does not control most nodes (failed Sybil attack).

Ignoring hashpower-based attacks, I don't see any basis for your point unless this miner (or mining consortium) controls most nodes on the entire network, or clusters enough Sybils in a given area to censor or double-spend transactions in that region. If he does not control most of the network, bypassing independent nodes accomplishes nothing.
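To see why controlling "most nodes" matters, consider a toy model: if an attacker controls a fraction p of reachable nodes and a victim selects n outbound peers uniformly at random, the probability of a full eclipse is p^n. Real peer selection is not uniform and eclipse attacks exploit other mechanisms too, so this is only a sketch.

```python
# Toy model: probability that ALL n outbound connections land on
# attacker-controlled nodes, given the attacker controls a fraction p
# of reachable nodes and peers are chosen uniformly at random.
# (Real peer selection is not uniform; illustrative only.)

def eclipse_probability(p: float, n: int = 8) -> float:
    """Chance every one of n randomly chosen peers is an attacker node."""
    return p ** n

for p in (0.5, 0.8, 0.95):
    print(f"attacker controls {p:.0%} of nodes -> "
          f"full-eclipse chance {eclipse_probability(p):.4f}")
```

Even an attacker controlling half the reachable nodes eclipses a given victim well under 1% of the time in this model, which is why "bypassing" honest nodes is not the same as defeating them.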

Summary:

1) 'Node Centralization' is no reason to choose between 1MB Core and 2MB other
2) Doubling block size will have negligible impact upon node count
3) In the end, nodes are of negligible marginal utility anyhow.

With IBLTs and weak blocks, and after segwit, I'd be more confident in #1. For now, the trend in node health gives cause to be wary of putting any unnecessary pressure on bandwidth. In reality, the biggest reason to choose between 1MB Core and 2MB Classic (the only option based on released software) is that a hard fork implemented without miner consensus is very likely to permanently break bitcoin into multiple ledgers. I don't think you've provided a modicum of evidence for #2. Regarding #3, the utility of, and incentive to run, a node are elusive but not non-existent. Non-mining nodes are essential to maintaining the integrity and security of the protocol, and many on this forum, including myself and David Rabahy above me, can attest to running nodes because we want the system to succeed and/or are invested in its success. But the more you squeeze node operators, the fewer of us there will be.