I certainly think the nuance here is far more easily explained as a tension between two completely legitimate engineering objectives which we as a community have to carefully compromise over, rather than some binary right-vs-wrong question of "is a bigger block size bad".
From the perspective of Bitcoin as an unstoppable, unregulatable, absolutely trustworthy e-gold, practically any block size is bad... smaller is pretty much always better. People are already using SPV wallets in large numbers because of the current cost of validating the chain, and people are choosing centralized pools over P2Pool because of the cost of dealing with the chain. There is no doubt in my mind that this is already hurting the decentralization of the system to an unacceptable degree: kidnapping _two_ people is enough to do enormous chain rewrites right now.
And at the same time, from the perspective of Bitcoin as a practical unit for common everyday payments, cheaply available to as many people as possible, practically any block size restriction is bad. It's also completely clear to me that transaction costs, even insubstantial ones, have already turned some people off from using Bitcoin. A tiny 0.0005 bitcent fee is _infinitely_ worse than zero by some metrics.

If we can first just accept that each of these views is a valid conclusion from a different objective for the system... then we can have a polite discussion about where on the compromise spectrum the system delivers the best value to the most people.
I hear you, and agree completely. There are different visions for Bitcoin: some people are happy with a niche off-grid currency/payments system, while some (more?) people are keen to see Bitcoin achieve greater goals, a global system which makes existing fiat currencies and card companies all but obsolete. Leaving those visions aside, however, there is indeed a middle ground on the block size issue between no change and infinite blocks. I have always considered an algorithmic increase that keeps ahead of demand to be the conservative option.
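To make "algorithmic increase" concrete, here is a rough sketch of one possible schedule, where the limit grows by a fixed percentage per year from the current 1 MB. The activation height and growth rate below are made-up illustration numbers, not a concrete proposal:

    # Illustrative sketch only: one hypothetical "algorithmic increase" schedule,
    # where the block size limit grows by a fixed percentage per year from a
    # known starting point. All constants here are assumptions, not a proposal.

    START_HEIGHT = 300000        # hypothetical activation height
    START_LIMIT = 1_000_000      # bytes (the current 1 MB limit)
    GROWTH_PER_YEAR = 1.20       # assumed 20% annual growth
    BLOCKS_PER_YEAR = 52560      # ~10-minute blocks

    def max_block_size(height: int) -> int:
        """Return the block size limit in bytes at a given block height."""
        if height <= START_HEIGHT:
            return START_LIMIT
        years = (height - START_HEIGHT) / BLOCKS_PER_YEAR
        return int(START_LIMIT * (GROWTH_PER_YEAR ** years))

    if __name__ == "__main__":
        for h in (300000, 352560, 405120, 457680):
            print(h, max_block_size(h))

The appeal of a fixed schedule like this is that anyone can compute the limit for any future height in advance, rather than having it renegotiated block by block.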
An interesting question this raises is how we measure the decentralization impact. Right now the best I have is basically a "gmaxwell-test": what is my personal willingness to run a node given a certain set of requirements? Given that I'm an exceptional sample in many regards, including having hundreds of cores and tens of terabytes of storage at home, if some scale level would dissuade me it's probably a problem.
Anecdotal evidence like this is helpful, but surely we can do better. This is the nub of the argument about Peter's video. He makes no attempt whatsoever to provide a numerical measurement of decentralization before and after 1MB average block sizes occur. Perhaps nodes should be able to issue query responses which contain information about their transaction handling capacity.
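As a very rough illustration of what such a query response might contain (purely hypothetical; nothing like this exists in the actual Bitcoin P2P protocol), something along these lines:

    # Hypothetical sketch: what a node "capacity report" might contain if such
    # a query existed. Not part of the real Bitcoin P2P protocol.
    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class NodeCapacityReport:
        max_connections: int      # peer slots the operator is willing to serve
        bandwidth_up_kbps: int    # sustained upload budget
        mempool_limit_mb: int     # how much unconfirmed data the node will hold
        verify_tx_per_sec: float  # measured signature-verification throughput
        archival: bool            # whether the full historical chain is stored

    report = NodeCapacityReport(
        max_connections=40,
        bandwidth_up_kbps=1024,
        mempool_limit_mb=300,
        verify_tx_per_sec=800.0,
        archival=True,
    )
    print(json.dumps(asdict(report), indent=2))

Aggregated across the reachable nodes, self-reported figures like these would at least give us a before/after number to argue about, though of course nodes could misreport.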
So what is the point of having a decentralized validation system if a user has to choose between five centralized solutions to make a transaction?
In Peter's vision of the future, those centralized solutions will all be fidelity-bonded (Chaum-trusted) banks.
https://bitcointalk.org/index.php?topic=146307.0

While I think the idea of them is very good, they must take market share of transactions on their own merits, not because Bitcoin is deliberately crippled.