Board Development & Technical Discussion
Merits 13 from 6 users
Re: Addressing Block size and occasional mempool congestion
by
vjudeu
on 04/06/2024, 06:35:01 UTC
⭐ Merited by BlackHatCoiner (4) ,pooya87 (4) ,ABCbits (2) ,Felicity_Tide (1) ,d5000 (1) ,DdmrDdmr (1)
Quote
but for how long are we going to continue like this?
As long as needed to reach transaction joining and to improve batching.

Quote
Even with this implementation, we haven't been able to say "Goodbye" to congestion.
Because those tools are not there to get rid of congestion for legacy transactions. They are there to allow cheaper transactions for those who opt in. Everyone else pays as usual, because those changes are backward-compatible.
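The opt-in discount mentioned above is how SegWit works in practice: witness bytes count less toward the fee-relevant "virtual size" (BIP 141), so only transactions that opt in get the cheaper rate. A minimal sketch, using hypothetical byte counts (the constants below are illustrative, not from the post):

```python
# Hypothetical sizes, in bytes, for illustration only.
BASE_SIZE = 200     # transaction serialized without witness data
WITNESS_SIZE = 107  # witness bytes (e.g. signature + pubkey for one P2WPKH input)

total_size = BASE_SIZE + WITNESS_SIZE  # full serialized size
weight = BASE_SIZE * 3 + total_size    # BIP 141 weight formula
vsize = (weight + 3) // 4              # virtual size, rounded up

# A legacy transaction of the same total size pays fees on all of its raw bytes,
# while the opt-in transaction pays on the smaller vsize.
print(f"weight={weight}, vsize={vsize}, raw size={total_size}")
```

Here the opt-in transaction is charged for 227 vbytes instead of 307 raw bytes; legacy transactions see no such reduction, which is the point being made above.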

Quote
but this idea wasn't enough to get approval from the community due to the absence of replay protection.
1. It was because it was a hard fork, not because of replay protection.
2. Replay protection can be introduced at the level of the coinbase transaction, but BTC simply didn't introduce replay protection as a soft fork, and altcoins like BCH didn't bother to make it "gradually activated", or to maintain any compatibility in sighashes. Imagine how much better some altcoins could be if all of their transactions were compatible with BTC, and if everything confirmed on BCH were eventually confirmed on BTC, and vice versa. Then you would have a 1:1 peg, and avoid a lot of issues.

Quote
Meaning, the absence of this protection could cause a replay attack.
It is a feature, not a bug. For example, some people talked about a "flippening", where some altcoin would reach a bigger hashrate than BTC and take the lead. But those altcoin creators introduced incompatible changes, which effectively destroyed any chance of such a "flippening". Because guess what: it is possible to start from two different points, reach an identical UTXO set on both chains, and then simply switch to the heaviest chain, without affecting any user. But many people wanted to split coins, not to merge them. And if splitting and dumping coins is profitable, then the end result can be easily predicted.
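The merge scenario above can be sketched in a few lines: if two chains commit to the identical UTXO set, a node that simply follows the chain with more cumulative work can switch between them without any user losing funds. All numbers below are hypothetical placeholders (in Bitcoin, per-block work is derived from each block's target):

```python
# Minimal sketch with made-up work values -- not real chain data.
def cumulative_work(chain):
    """Sum of per-block work over a chain."""
    return sum(block["work"] for block in chain)

chain_a = [{"work": 100}, {"work": 120}, {"work": 130}]
chain_b = [{"work": 90}, {"work": 140}, {"work": 150}]

# Precondition from the argument above: both chains end in the same UTXO set.
utxo_a = {"tx1:0": 50, "tx2:1": 25}
utxo_b = {"tx1:0": 50, "tx2:1": 25}

best = None
if utxo_a == utxo_b:
    # Safe to follow the heaviest chain: every user's coins exist on both.
    best = chain_a if cumulative_work(chain_a) >= cumulative_work(chain_b) else chain_b
    print("following chain with cumulative work =", cumulative_work(best))
```

The incompatible sighash and replay-protection changes mentioned above break exactly this precondition, which is why the "flippening" switch never becomes possible.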

Quote
This was supposed to address the issue, but it doesn't have a say in the size of blocks either.
If you want to solve the problem of scalability, then the perfect solution is one where you don't have to care about things like the maximum block size. Then it "scales": if you can do 2x more transactions without touching the maximum block size, it is "scalable". If you can do 2x more transactions, but it consumes 2x more resources, then it is not "scaling" anymore. It is just "linear growth". And that can be done without any changes in the code: just release N altcoins, BTC1, BTC2, BTC3, ..., BTCN, and tada! You have N times more space!
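The distinction drawn above reduces to simple arithmetic: if capacity and resource usage grow together, the cost per transaction never improves. A toy calculation with hypothetical units (none of these numbers come from the post):

```python
# Illustrative numbers only.
def cost_per_tx(transactions, resources):
    return resources / transactions

base_txs, base_resources = 2000, 1.0  # one chain, arbitrary units

# "Linear growth": N parallel chains (BTC1..BTCN) give N times the space,
# but also consume N times the resources -- cost per transaction is unchanged.
n = 4
linear = cost_per_tx(base_txs * n, base_resources * n)

# "Scaling": 2x transactions with the same block size and resources,
# e.g. via batching or transaction joining -- cost per transaction drops.
scaled = cost_per_tx(base_txs * 2, base_resources)

print(linear == cost_per_tx(base_txs, base_resources))  # no improvement
print(scaled < linear)                                  # genuine improvement
```

This is why adding parallel chains buys raw space but not scalability in the sense used above.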

Quote
but what's now left for others like myself rather than to sit and wait in the queue along with other transactions in the mempool?
If you need a technical solution, then you need better code. And then you have two options: write that better code yourself, or find someone who will do it.

Quote
What do you think is a possible solution to this problem?
Transaction joining, batching, and having more than one person on a single UTXO in a decentralized way.
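To see why batching helps, compare the bytes consumed by N separate payments against one transaction paying all N recipients at once. The constants below are rough, commonly cited approximations for legacy transactions, used here only for illustration:

```python
# Approximate byte costs (hypothetical round numbers).
TX_OVERHEAD = 10   # version, locktime, input/output counters
INPUT_SIZE = 148   # typical legacy input
OUTPUT_SIZE = 34   # typical P2PKH output

def separate(n_payments):
    # Each payment is its own transaction: 1 input, 1 payment output, 1 change output.
    return n_payments * (TX_OVERHEAD + INPUT_SIZE + 2 * OUTPUT_SIZE)

def batched(n_payments):
    # One transaction: 1 input, n payment outputs, 1 change output.
    return TX_OVERHEAD + INPUT_SIZE + (n_payments + 1) * OUTPUT_SIZE

n = 10
print(f"{n} separate txs: {separate(n)} bytes, batched: {batched(n)} bytes")
```

The overhead and input are paid once instead of N times, so the batched transaction uses a fraction of the block space; transaction joining extends the same idea across different senders, and shared UTXOs push it further still.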