Board: Development & Technical Discussion
Topic: Re: The MAX_BLOCK_SIZE fork
by gmaxwell on 02/02/2013, 21:09:59 UTC
Quote
Actually, that thread outlines the way that future PCs (if not smartphones) could conceivably run a full node (or "almost-full" node) even with no limit / floating limit.
There is much merit to etotheipi's writing, but what he proposes massively _increases_ the IO and computational cost of running a full node (or a fully validating but historyless node) compared to a plain committed UTXO set used for validation. That increased node burden is one of the biggest arguments against what he's proposing, and I suspect it will ultimately doom the proposal.
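To make the cost comparison concrete, here is a toy sketch of my own (not etotheipi's actual design): adding or spending an output in a flat UTXO set touches one record, while the same update in a hashed tree structure must rehash every ancestor of the changed leaf, costing on the order of log(n) hashes plus the corresponding node reads and writes per output.

Code:
import hashlib

# Toy comparison only: a flat UTXO map versus a binary Merkle tree over
# the same entries. Real committed-UTXO proposals use tries, wider
# fanouts, and batching, but the shape of the cost is the same: the
# hashed structure does O(log n) hashing and node I/O per update where
# the flat set does O(1).

def h(b):
    return hashlib.sha256(b).digest()

def flat_update(utxo, outpoint, txout):
    utxo[outpoint] = txout              # single record touched

def merkle_update(leaves, nodes, index, txout):
    """Update leaf `index` and rehash its ancestors; returns how many
    internal nodes had to be recomputed (and rewritten to storage)."""
    leaves[index] = txout
    pos = index + len(leaves)           # leaves live at nodes[n .. 2n-1]
    nodes[pos] = h(txout)
    touched = 0
    while pos > 1:
        pos //= 2
        nodes[pos] = h(nodes[2 * pos] + nodes[2 * pos + 1])
        touched += 1
    return touched

n = 1 << 20                             # ~1M outputs in the toy set
leaves = [b"\x00" * 32] * n
nodes = [b"\x00" * 32] * (2 * n)
rehashes = merkle_update(leaves, nodes, 12345, b"txout")
print("flat set: 1 write; merkle set: 1 write +", rehashes, "rehashes")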

I have seen nothing proposed, except Moore's law, that would permit full validation on "desktop" systems with gigabyte blocks.
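A rough back-of-the-envelope calculation (my own numbers, with an assumed average transaction size, not figures from the thread) shows the kind of sustained load a gigabyte block implies for a validating node:

Code:
# Rough arithmetic only; the 250-byte average transaction size and the
# one-signature-per-transaction floor are assumptions for illustration.
BLOCK_BYTES = 1_000_000_000        # a "gigabyte block"
BLOCK_INTERVAL_S = 600             # ten-minute block target
AVG_TX_BYTES = 250                 # assumed average transaction size
SIGS_PER_TX = 1                    # assumed lower bound on signatures

txs_per_block = BLOCK_BYTES // AVG_TX_BYTES
tx_per_second = txs_per_block / BLOCK_INTERVAL_S
sig_verifies_per_second = tx_per_second * SIGS_PER_TX
bandwidth_mb_per_s = BLOCK_BYTES / BLOCK_INTERVAL_S / 1e6

print(f"{txs_per_block:,} transactions per block")
print(f"{tx_per_second:,.0f} tx/s sustained")
print(f"{sig_verifies_per_second:,.0f} signature verifications/s at minimum")
print(f"{bandwidth_mb_per_s:.1f} MB/s of block data, before relay overhead")

Under these assumptions that is thousands of signature verifications per second, plus the matching UTXO lookups, sustained indefinitely.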

Quote
I just can't see why this artificial limit that was intended as temporary from the start should be accepted as an immutable part of the protocol.
There are plenty of soft limits in Bitcoin (like the 500k soft limit on maximum block size), but the 1MB limit is not soft. I'm not aware of any evidence to suggest that it was temporary from the start, and absent it I would not have spent a dollar of my time on Bitcoin: without some answer to how the system remains decentralized with enormous blocks, and how miners will be paid to provide security without blockspace scarcity or cartelization, the whole idea is horribly flawed.

I also don't think a network rule should be a suicide pact: my argument for the correctness of limiting the size has nothing to do with the way it always was, but that doesn't excuse being inaccurate about the history.
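For readers unfamiliar with the distinction, here is a minimal sketch (in Python rather than the actual C++ source, with names only mirroring the real constants) of how a miner's soft policy limit differs from the 1MB consensus rule:

Code:
# Illustrative sketch of the soft/hard distinction, not the real
# Bitcoin source (which is C++). MAX_BLOCK_SIZE mirrors the 1,000,000
# byte consensus constant; the 500,000 byte figure mirrors the kind of
# default mining policy any miner can change locally.
MAX_BLOCK_SIZE = 1_000_000          # consensus: larger blocks are invalid
DEFAULT_POLICY_MAX = 500_000        # local mining policy: freely adjustable

def check_block_size(serialized_block):
    """Consensus check run by every validating node; relaxing it means
    a hard fork, because existing nodes reject larger blocks outright."""
    return len(serialized_block) <= MAX_BLOCK_SIZE

def fill_block_template(serialized_txs, policy_max=DEFAULT_POLICY_MAX):
    """Mining policy: a miner may raise policy_max up to MAX_BLOCK_SIZE
    without any protocol change, and other nodes still accept the block."""
    chosen, used = [], 0
    for tx in serialized_txs:
        if used + len(tx) > policy_max:
            break
        chosen.append(tx)
        used += len(tx)
    return chosen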