It's good that instrumentation and controls are coming. IMO they are a couple years late.
I must have missed your contributions... Less snarkily, they required upgrades to the commonly deployed p2p behavior before they could be deployed.
If done right, controls would not slow down getting current blocks.
Right now they don't, though later resource limits will also be able to limit at the tip (as a last-ditch option -- we'd rather have a slow node than no node).
It would be entirely reasonable to charge newbies a fee to offset these real costs.
And guarantee a loss of decentralization. No thanks. But the costs of bringing up new peers should rest on those who have plenty of resources and don't mind spending them on that... and it's much closer to that now in 0.12.
As to constant factors: they are easily dismissed by theoreticians. Practical people who design, deploy, manage, and operate businesses that provide cloud computing services live or die by small percentage changes in constant factors.
Agreed. One of the most regrettable things about this block-size snafu, in my view, is the characterization of 800% or 2000% increases as "modest", or of a _doubling_ as too small to consider. Some people talk in terms of "plan for success", but can't seem to accept the possibility that the system will successfully respond to demand as capacity grows, without resource limits being removed in advance.
In this case, it appears that there are tremendous opportunities for two types of improvements here. First, in reducing the amount of data that an operational node needs to transmit and receive, and second in tuning the nodes and their networking environment to effectively utilize the available resources.
Absolutely, and that's what's being done; more help is always welcome.
Is there more data behind the plot? It would be useful to look at the distribution and the performance for specific pools vs. the topology involved, including machines and link rates. (I can appreciate that some of this data may be proprietary or otherwise unavailable.)
I have detailed per-pool data, but I would prefer not to publish information that could be used for competitive comparison, because there are various ways to cook the results, including ones which are bad for the network (in particular, always performing verification-free / SPV mining). E.g. if bar pool feels pressure because it's slower than foo pool, it'll switch to SPV mining. The information broken down by pool is pretty noisy, because many pools find fairly few blocks. There are performance differences, though the spread isn't huge across most of them; there are one or two that reliably perform quite poorly, and those always end up excluded by that "time to reach half the pools" metric.
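As an illustration only, here is a minimal sketch of how a "time to reach half the pools" metric could be computed from per-block, per-pool first-seen timestamps; the pool names, data format, and exact definition below are assumptions, not the actual pipeline:

```python
def time_to_reach_half(first_seen_by_pool, origin_time):
    """Seconds until at least half of the observed pools have seen the block."""
    delays = sorted(t - origin_time for t in first_seen_by_pool.values())
    half_index = (len(delays) + 1) // 2 - 1  # index of the pool that makes it "half"
    return delays[half_index]

# Made-up timestamps (seconds after the block was first announced):
block_seen = {"poolA": 3.1, "poolB": 4.0, "poolC": 4.2, "poolD": 19.5}
print(time_to_reach_half(block_seen, origin_time=0.0))  # 4.0 -> the slow outlier never moves it
```

Because only the median-ish pool determines the value, one reliably slow pool never affects the metric, which matches the exclusion described above.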
I did all manner of multifactor analysis, including checking an IsChina effect and considering all pairwise combinations. IsChina in aggregate didn't have a statistically significant effect when also considering the individual pool performance. Likewise, none of the pairwise parameters rose to significance. A couple of the pools have a statistically significant this-pool-sucks effect, but per-pool bandwidth numbers didn't differ significantly from the mean overall (which I assume is due more to not having enough data, and to the impact depending on both block sources and destinations, than to there being no effect).
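As a hedged sketch only (not the analysis actually performed above), one way to test an aggregate IsChina effect while still accounting for individual pool performance is to treat pool as a grouping effect; the column names and input file here are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-block, per-pool propagation delays with an is_china flag per pool.
df = pd.read_csv("block_propagation.csv")

# Pool identity as a random (grouping) effect, is_china as a fixed effect. Since every
# pool is either in China or not, is_china is perfectly collinear with a full set of
# pool dummies in plain OLS, so it has to be judged against pool-level variation.
aggregate = smf.mixedlm("delay_s ~ is_china", df, groups="pool").fit()
print(aggregate.summary())

# Per-pool "this-pool-sucks" check: which pools are significantly slower than the reference?
per_pool = smf.ols("delay_s ~ C(pool)", df).fit()
print(per_pool.summary())
```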
I can speak from personal experience as a small-scale miner. Last summer I was solo mining with two S3s that I was still running despite the heat, and managed to score a block on solo.ckpool.org. The 0.5% fee was well worth it for the use of a high-bandwidth connection, because the orphan risk out of my DSL-based home office would have been unacceptable. (You might ask, why was I solo mining? That's another question, but it certainly looked like some of the pools had a consistent run of bad luck, indicating that something was seriously wrong.)
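As a rough back-of-envelope illustration of why propagation delay matters so much to a solo miner: the chance that someone else finds a competing block during an extra delay of t seconds is approximately 1 - exp(-t/600), given Bitcoin's ~600 second average block interval. The delays below are made up, not measurements of any particular link:

```python
import math

def orphan_risk(extra_delay_s, block_interval_s=600.0):
    # Probability that a competing block is found during the extra propagation delay.
    return 1.0 - math.exp(-extra_delay_s / block_interval_s)

for delay in (0.5, 5.0, 30.0):   # fast relay vs. decent link vs. slow upload, illustrative only
    print(f"{delay:5.1f} s extra delay -> ~{orphan_risk(delay):.2%} orphan risk")
```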
FWIW, P2pool integrates an efficient relay mechanism similar to the relay network client protocol. Despite running on all manner of home equipment, it had the lowest orphan rate of any pool back when it had enough hashrate to measure; you can quasi-solo on it by turning your share difficulty all the way up to 1/10th of the block difficulty.
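A sketch of the arithmetic behind "quasi solo" (the hashrate and difficulty numbers are made up): with share difficulty capped at 1/10th of the block difficulty, a miner expects roughly ten shares per block's worth of work, so payouts arrive in about ten smaller chunks rather than one.

```python
def expected_seconds_per_solution(difficulty, hashrate_hps):
    # Expected hashes per solution at a given difficulty is difficulty * 2^32.
    return difficulty * 2**32 / hashrate_hps

block_difficulty = 50e9                      # hypothetical network difficulty
miner_hashrate = 1e12                        # hypothetical 1 TH/s miner
share_difficulty = block_difficulty / 10     # P2pool share difficulty set to the 1/10th cap

t_block = expected_seconds_per_solution(block_difficulty, miner_hashrate)
t_share = expected_seconds_per_solution(share_difficulty, miner_hashrate)
print(f"expected time per block: {t_block/86400:,.0f} days")
print(f"expected time per share: {t_share/86400:,.0f} days  (~10x more frequent)")
```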