Board: Development & Technical Discussion
Merits 6 from 4 users
Topic OP
Addressing Block size and occasional mempool congestion
by Felicity_Tide on 04/06/2024, 05:50:28 UTC
⭐ Merited by NotATether (3) ,vjudeu (1) ,ABCbits (1) ,d5000 (1)
Whether we like it or not, the problem of scalability is not a topic that should be treated as a done deal. We tend not to talk much about it when the network is running smoothly and there are no obvious signs of congestion, but we come back to the same problem whenever transaction fees rise and thousands of pending transactions sit in the mempool waiting to be confirmed. At that point, those who can afford to pay higher fees get their transactions confirmed ahead of everyone else. But for how long are we going to continue like this?
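
To make the "higher fees jump the queue" point concrete, here is a rough Python sketch of how a block builder prioritises mempool transactions by fee rate. The transactions are made up, and real block building (for example Bitcoin Core's ancestor-fee-rate selection) is more sophisticated than this simple sort; this is only meant to illustrate the idea.

Code:
# Illustrative only: picking mempool transactions by fee rate (sat/vB).
MAX_BLOCK_VSIZE = 1_000_000  # roughly one block's worth of virtual size

mempool = [
    {"txid": "a1", "vsize": 250, "fee_sats": 50_000},  # 200 sat/vB
    {"txid": "b2", "vsize": 140, "fee_sats": 1_400},   # 10 sat/vB
    {"txid": "c3", "vsize": 300, "fee_sats": 3_000},   # 10 sat/vB
    {"txid": "d4", "vsize": 180, "fee_sats": 36_000},  # 200 sat/vB
]

def fee_rate(tx):
    return tx["fee_sats"] / tx["vsize"]  # satoshis per virtual byte

# Highest fee rate first; low-fee transactions wait for a later block.
block, used = [], 0
for tx in sorted(mempool, key=fee_rate, reverse=True):
    if used + tx["vsize"] <= MAX_BLOCK_VSIZE:
        block.append(tx["txid"])
        used += tx["vsize"]

print("included in this block:", block)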

I spent several hours roaming around the internet, trying to understand every plan that has been suggested for addressing the problem of scalability. I even read through a few BIPs, such as BIP101 and others, that have so far been rejected by the Bitcoin community because they were not considered a satisfactory answer to the block size issue.


The idea of making blocks bigger has been both embraced and discouraged within the Bitcoin community because of the pros and cons attached to it. Segregated Witness (SegWit), specified in BIP-141 (its activation was later pushed by the user-activated soft fork in BIP-148), increases block capacity indirectly by moving signature (witness) data out of the part of a transaction that counts fully toward the block size limit. This means there is more room for transactions, but only because part of each transaction is discounted rather than removed outright. To me that sounded healthy, and it showed how far the Bitcoin community was, and still is, willing to go to address this issue. A SegWit address begins with bc1 (native SegWit) or 3 (SegWit wrapped in P2SH), and its main benefit is lower transaction fees, since such transactions take up less effective block space. Even with this implementation, we have not been able to say "goodbye" to congestion.
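
For anyone curious how the SegWit discount actually works, BIP-141 counts non-witness bytes as 4 weight units each and witness bytes as 1, with a 4,000,000 weight unit limit per block. A small Python sketch of that accounting follows; the byte counts for the example transaction are hypothetical, chosen only to show the arithmetic.

Code:
# Sketch of BIP-141 weight accounting, which lets SegWit fit more
# transactions without a hard-fork block size increase.

def tx_weight(non_witness_bytes, witness_bytes):
    # Non-witness data counts 4 weight units per byte, witness data only 1.
    return 4 * non_witness_bytes + witness_bytes

def tx_vsize(non_witness_bytes, witness_bytes):
    # Virtual size = weight / 4, rounded up; fee rates are quoted in sat/vB.
    weight = tx_weight(non_witness_bytes, witness_bytes)
    return (weight + 3) // 4

# Hypothetical small transaction:
non_witness, witness = 154, 107
print("weight:", tx_weight(non_witness, witness))  # 723 weight units
print("vsize :", tx_vsize(non_witness, witness))   # 181 vB vs 261 raw bytes
print("block limit: 4,000,000 weight units")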

Since SegWit alone did not directly address the small block size, SegWit2x was proposed to increase the block size to 2 MB so that more transactions could fit, but the idea did not gain enough approval from the community, partly because it lacked replay protection, meaning the split could have exposed users to replay attacks. The Lightning Network, on the other hand, works by creating a payment channel between two parties. It was supposed to relieve the pressure, but it does not change the block size either, and the technicalities behind it have not carried everyone along; many users still struggle to understand it.
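
Very roughly, a Lightning payment channel keeps a running balance between two parties off-chain and only touches the blockchain when the channel is opened or closed. The toy Python model below is not real Lightning code (there is no cryptography, no commitment transactions, no HTLCs); it is just a sketch of why thousands of payments can consume almost no block space.

Code:
# Toy model of a Lightning-style payment channel: balances are updated
# off-chain, and only opening/closing the channel needs on-chain space.

class ToyChannel:
    def __init__(self, alice_sats, bob_sats):
        self.balances = {"alice": alice_sats, "bob": bob_sats}
        self.updates = 0  # off-chain payments; no on-chain fees

    def pay(self, sender, receiver, amount):
        if self.balances[sender] < amount:
            raise ValueError("insufficient channel balance")
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.updates += 1

    def close(self):
        # Only this step would produce an on-chain transaction.
        return dict(self.balances)

channel = ToyChannel(alice_sats=100_000, bob_sats=50_000)
for _ in range(1000):              # a thousand payments, zero block space
    channel.pay("alice", "bob", 10)
print(channel.close(), "after", channel.updates, "off-chain updates")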

The issue of congestion is still very much with us, even though it has tailed off for now because there is no current event or rush of activity to trigger it. Periods like a halving, a bull run or a bear market are the typical times to witness it. Congestion will surely come again; some of us may have prepared the Lightning Network as a backup plan, while others can save themselves by paying extra transaction fees. But what is left for people like me, other than to sit and wait in the queue alongside all the other transactions in the mempool?
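
For those of us stuck waiting, it at least helps to know how long the queue is before broadcasting. A minimal sketch of checking mempool congestion and a fee estimate is below; it assumes a local Bitcoin Core node with RPC enabled and the third-party python-bitcoinrpc package, and the credentials are placeholders.

Code:
# Assumes a local Bitcoin Core node with RPC enabled; credentials are placeholders.
from bitcoinrpc.authproxy import AuthServiceProxy

rpc = AuthServiceProxy("http://rpcuser:rpcpassword@127.0.0.1:8332")

info = rpc.getmempoolinfo()
print("transactions waiting :", info["size"])
print("mempool bytes        :", info["bytes"])

# Fee estimate for confirmation within ~6 blocks (result is in BTC/kvB).
estimate = rpc.estimatesmartfee(6)
print("suggested fee rate   :", estimate.get("feerate"), "BTC/kvB")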

Just a simple but decision-making question for both developers and non-developers today:
1. What do you think is a possible solution to this problem?


I am 100% open to correction, as I still see myself as a learner. Pardon any errors of mine and share your personal opinion.


GitHub: https://github.com/bitcoin/bips/tree/master