Board: Development & Technical Discussion
Re: Addressing Block size and occasional mempool congestion
by ABCbits on 05/06/2024, 08:46:06 UTC
Quote
making OP_FALSE OP_IF ... OP_ENDIF non-standard
It could help, but it would not be enough when you have mining pools willing to bypass such limitations.

And that's why I also mention accepting multiple methods. As for mining pools, I only hope they continue to charge a premium for including non-standard transactions. For example, https://mempool.space/ suggests 18 sat/vB for no priority and 27 sat/vB for high priority, while https://slipstream.mara.com/ currently accepts non-standard TXs at a rate of 81 sat/vB.
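
For a quick sense of that premium, here is a minimal sketch in Python, with the rates above hard-coded as a snapshot (they change constantly, so the numbers are illustrative only):

Code:
# Fee-rate snapshot quoted above (sat/vB); treat as illustrative values only.
MEMPOOL_NO_PRIORITY = 18      # mempool.space "no priority" suggestion
MEMPOOL_HIGH_PRIORITY = 27    # mempool.space "high priority" suggestion
SLIPSTREAM_NONSTANDARD = 81   # slipstream.mara.com rate for non-standard TXs

def premium(nonstandard_rate: int, standard_rate: int) -> float:
    """How many times more a non-standard TX pays per vbyte."""
    return nonstandard_rate / standard_rate

print(f"vs no priority:   {premium(SLIPSTREAM_NONSTANDARD, MEMPOOL_NO_PRIORITY):.1f}x")
print(f"vs high priority: {premium(SLIPSTREAM_NONSTANDARD, MEMPOOL_HIGH_PRIORITY):.1f}x")

At those snapshot rates, a non-standard TX sent through Slipstream pays roughly 3x to 4.5x what mempool.space suggests for ordinary transactions.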

Quote
Your statement isn't relevant to today's conditions
It somewhat is, but taken from another angle: big mining pools will handle it, but regular users may stop running non-mining nodes. And that will indirectly lead to mining centralization, because then nobody except big pools will agree to run a full archival node 24/7. And in that case, it will be possible to skip more and more steps if users stop caring about validating the output produced by those mining pools.

Fair point, although it's not like I'm suggesting a huge block size increase either.

Quote
--snip--

I think the main factors besides block propagation are CPU and memory requirements. 4 MB blocks (the current maximum) need, according to a Bitfury study, about 16 GB of memory. So on a state-of-the-art PC with 16 GB+ RAM, you can still run a full Bitcoin node in the background, even if it would probably already affect your other activities a bit. But if the block size was significantly higher, you would need a dedicated device for that purpose, and not exactly a cheap one.

Do you mean this study, https://bitfury.com/content/downloads/block-size-1.1.1.pdf? After many years, I realize they didn't consider massive UTXO growth, compact blocks (which massively help block verification/propagation and reduce bandwidth) and other things.
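
On compact blocks specifically: under BIP 152, a peer that already has the block's transactions in its mempool only needs the 80-byte header, a small nonce and a 6-byte short ID per transaction instead of re-downloading the full block. A back-of-envelope sketch, assuming a hypothetical 4 MB block with 3,000 transactions (round numbers picked for illustration, not measured data):

Code:
# Back-of-envelope estimate of compact block (BIP 152) relay savings.
# Block size and transaction count are assumed round numbers, not measurements.
HEADER_SIZE = 80              # block header is always 80 bytes
NONCE_SIZE = 8                # cmpctblock SipHash nonce
SHORT_ID_SIZE = 6             # BIP 152 short transaction IDs are 6 bytes each
FULL_BLOCK_SIZE = 4_000_000   # assume a block near the 4 MB limit
TX_COUNT = 3_000              # assumed number of transactions in that block

compact_size = HEADER_SIZE + NONCE_SIZE + TX_COUNT * SHORT_ID_SIZE
print(f"full block : {FULL_BLOCK_SIZE / 1_000_000:.1f} MB")
print(f"cmpctblock : {compact_size / 1_000:.1f} kB (plus prefilled/missing txs)")
print(f"reduction  : ~{FULL_BLOCK_SIZE / compact_size:.0f}x when the mempool already has the txs")

Even ignoring the varint counts and the prefilled coinbase, the announcement shrinks from megabytes to tens of kilobytes, which is part of why propagation is far less sensitive to block size than that older study assumed.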

Quote
1. What do you think is a possible solution to this problem?
I've seen some people focus on a single approach (such as only focusing on LN or only focusing on a block size increase). But IMO we should accept various methods to mitigate the problem, such as making OP_FALSE OP_IF ... OP_ENDIF non-standard, increasing the block size and using LN/sidechains (if they match how you use Bitcoin) altogether.
All corrections noted.
Most people have been left with no choice but to master the use of LN. Due to some technicalities, I am not sure everyone will want to learn it unless the congestion gets out of hand, forcing the majority to learn. Good choice.
Increasing the block size is of course the leading solution, which lots of people have doubts about. Don't you think increasing the block size would affect mining, thereby requiring more mining power? Sorry for asking too many questions. Can you please clarify this for me?

No, a larger maximum block size doesn't require higher mining power/hashrate. After all, mining basically performs SHA256d on the block header, which always has a size of 80 bytes.
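
A minimal sketch of that point in Python: the proof-of-work hash covers only the 80-byte header, and the transactions enter it solely through the 32-byte merkle root, so the hashing work per attempt is the same whether the block body is 1 MB or 100 MB. The field values below are dummies for illustration:

Code:
import hashlib
import struct

def sha256d(data: bytes) -> bytes:
    """Double SHA-256, the hash Bitcoin miners grind on."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

# Dummy header fields for illustration; real values come from a block template.
version     = 0x20000000
prev_hash   = bytes(32)      # 32-byte hash of the previous block
merkle_root = bytes(32)      # 32-byte merkle root commits to all transactions
timestamp   = 1717574766     # unix time
bits        = 0x17034219     # compact-encoded difficulty target (example value)
nonce       = 0

# Serialized header: 4 + 32 + 32 + 4 + 4 + 4 = 80 bytes, no matter how many
# transactions (or megabytes) the block body contains.
header = struct.pack("<L32s32sLLL", version, prev_hash, merkle_root,
                     timestamp, bits, nonce)
assert len(header) == 80

print(sha256d(header)[::-1].hex())  # shown in the usual big-endian display order

Bigger blocks raise the cost of relaying, storing and validating the block body, which is the resource concern discussed above, not the cost of grinding the header.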