Board Development & Technical Discussion
Re: Dev & Tech Opinion on Mike Hearn's "Crash Landing" scenario
by gmaxwell on 08/05/2015, 08:05:38 UTC
⭐ Merited by ABCbits (7)
We've actually hit the soft limit before, consistently and for long periods, and did not see the negative effects described there (beyond confirmation times for lower-fee transactions going up, of course).

If the frightful claims there are true then arguably Bitcoin is just doomed; after all-- there is no force on earth that constrains how many transactions people might seek to produce, so any amount of capacity could be filled at any time.  And of course, there are limits-- even if you abandon decentralization as a goal entirely, computers are made out of matter and consume energy; they are finite, and Bitcoin's scale is not increased by having more users. There are single services today that do enough transactions internally to fill gigabyte blocks if those transactions were dumped onto the chain-- so whatever the limit is, it's likely that a single entity could hit it if it decided to do so.  Fortunately, the fee market arbitrates access neutrally, and it does that at any scale.   Mike completely disregards this because he believes transactions should be free (which should be ringing bells for anyone thinking X MB blocks won't be constantly full at X MB; especially with all the companies being created to do things like file storage in the Bitcoin blockchain).

One of the mechanisms that makes running up against the limit more tolerable, and which is simple to implement and easy for wallets to handle, is replace-by-fee (in the boring, greater-outputs mode; not talking about the scorched-earth stuff)-- but that's something Mike has vigorously opposed for some reason. Likewise CPFP (child-pays-for-parent) potentially helps in some situations too-- but it's mostly only had technical love from Luke and Matt.  It's perhaps no coincidence that all this work made progress in early 2013 when blocks were full, and has not had much of any attention since.
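To make the mechanics concrete, here's a rough Python sketch of the two ideas (the transaction objects with .fee, .size and .outputs attributes are hypothetical; this is not Bitcoin Core code): the boring greater-outputs replacement rule, and how CPFP lets a miner judge a stuck parent by the combined fee rate of the parent and its child.

Code:
# Hypothetical sketch only; not Bitcoin Core code.

def allow_replacement(old_tx, new_tx):
    """'Boring' replace-by-fee: accept the replacement only if it pays a
    strictly higher fee and no output amount shrinks, so no recipient is
    made worse off by the replacement."""
    if new_tx.fee <= old_tx.fee:
        return False
    if len(new_tx.outputs) < len(old_tx.outputs):
        return False
    return all(new_amt >= old_amt
               for new_amt, old_amt in zip(new_tx.outputs, old_tx.outputs))

def package_feerate(parent, child):
    """Child-pays-for-parent: evaluate the parent and child together, so a
    high-fee child can pull a stuck low-fee parent into a block."""
    return (parent.fee + child.fee) / (parent.size + child.size)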

More than a few of the claims there are just bizarre, e.g.
Quote
Some Bitcoin Core developers believe that the reject message should be something only used for debugging and not something apps can rely upon. So there is no guarantee that a wallet would learn that its transaction didn’t fit.

The particular issue there is that the reject messages are fundamentally unhelpful for this (though they're a nice example of another railroaded feature, one that introduced a remotely exploitable vulnerability).  The issue is that just because node X accepted your transaction doesn't tell you whether node Y, N hops away, did or didn't; in particular it doesn't tell you if there is even a single miner anywhere in the network that rejected it. What would you expect to avoid this? Every node flooding a message for every transaction it rejects to every other node (i.e. a rejection causing nodes² traffic)? Nodes do produce rejects today, but it's not anyone's opinion that prevents a guarantee there; the nature of a distributed/decentralized system does. The whole notion of a reject being useful here is an artifact of erroneously trying to shoehorn a model from centralized client/server systems into Bitcoin, which is fundamentally unlike them.
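For a sense of scale, a purely illustrative back-of-envelope calculation (the node and rejection counts below are made up, not measurements):

Code:
# Purely illustrative arithmetic; the figures are assumptions, not measurements.
nodes = 6000            # assumed number of reachable nodes
rejects_per_sec = 10    # assumed network-wide rate of rejected transactions

# If every node flooded a reject for every transaction it rejects to every
# other node, message volume grows with the square of the node count:
messages_per_sec = nodes * (nodes - 1) * rejects_per_sec
print(messages_per_sec)  # roughly 360 million messages per second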

Quote
I don’t know how fast this situation would play out, but as Core will accept any transaction that’s valid without any limit a node crash is eventually inevitable
The amount of transactions in memory is strictly limited by the number of outputs in the UTXO set, as well as by a rate limiter/priority for free transactions; so there is technically an upper bound (though it's not terribly relevant, because it's high and the limiter means it takes forever to reach it).  Of course, it's trivial to keep the mempool constantly bounded, but there has been little interest in completing that because the theoretical large size is not believed to be practically exploitable given the limits-- there are patches, though they're not liked by those who think that never forgetting anything helps zero-conf security. (I don't generally, considering that there are much easier ways to defraud on zero conf than hoping some node forgets a very-low-priority zero-conf transaction.)
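Bounding the pool is conceptually simple: when a byte cap is exceeded, forget the lowest fee-rate transactions first. A minimal sketch of that idea (hypothetical class and attribute names; not one of the actual patches):

Code:
# Hypothetical sketch of a size-bounded mempool; not one of the actual patches.
class BoundedMempool:
    def __init__(self, max_bytes=300_000_000):
        self.max_bytes = max_bytes
        self.txs = {}          # txid -> tx (tx assumed to have .fee and .size)
        self.total_bytes = 0

    def add(self, txid, tx):
        self.txs[txid] = tx
        self.total_bytes += tx.size
        # While over the cap, evict whichever transaction pays the lowest
        # fee per byte until the pool fits again.
        while self.total_bytes > self.max_bytes and self.txs:
            worst = min(self.txs, key=lambda t: self.txs[t].fee / self.txs[t].size)
            self.total_bytes -= self.txs[worst].size
            del self.txs[worst]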

The comments about filling up memory are grimly amusing to me in general, for reasons I currently can't discuss in public (please feel free to ping me in six months).

Overall, I think the article does a good job of suggesting that the goal of the recent blocksize proposal is a far more wide-spanning change than just incrementing the blocksize to make necessary room, and that it's also a move to make a substantial change from the original long-term security model to an underspecified one which doesn't involve fees; a trajectory toward an unlimited blocksize that processes any transactions that come in, at any cost, even if that means centralizing the whole network onto a single mega-node in order to accept the scale.  Or at least that appears to be the only move that has a clear answer to the case of 'there will be doom if the users make too many transactions' (the answer being that the datacenter adds more storage containers full of equipment).