Then we should reduce the space to 10 kB, allowing only $10k+ transactions, because buying coffee with Bitcoin is pointless, right?
Note that mining pools can do so without asking anyone for permission. And for some reason, they didn't. So you can ask them why they didn't put that kind of limit on the blocks they produce. You can also ask some node operators why they collect and process transactions below one satoshi per virtual byte. For example, here you can see a grey band of cheap transactions at the bottom of the chart:
https://jochen-hoenicke.de/queue/#BTC,all,weight,0

To be honest, I would love for the ones who are against bigger blocks to make up their minds and form a group, so they don't go against each other, because I keep hearing contrarian arguments.
Note that keeping the limit as it is, is the easiest thing to achieve, because it is about preserving the status quo. This means that no matter whether you want to increase or decrease some default values related to block size, default fees, or other default rules, you need to reach consensus. And guess what: reaching consensus is hard; even on topics like Segwit or Taproot, it took years. Convincing miners to set their maximum block size to 32 MiB (the default from the first version, used by Satoshi) is as hard as convincing them to go for 10 kB (there were times when the practical limit was lower than 1 MB, because of BDB locks and similar issues).
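For what it's worth, these limits are already local knobs in Bitcoin Core that any miner or node operator can change unilaterally. A sketch of the relevant bitcoin.conf settings (the option names are real Bitcoin Core options; the values are purely illustrative, not recommendations):

# A miner can cap their own block templates well below the consensus
# limit (~4,000,000 weight units); no permission needed:
blockmaxweight=1000000
# ...and skip transactions below a chosen feerate (in BTC/kvB):
blockmintxfee=0.00005
# A node operator can refuse to accept or relay cheap transactions
# (the default is 0.00001 BTC/kvB, i.e. 1 sat/vB):
minrelaytxfee=0.00001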
Also, increasing the size of the block makes this attack worse than it currently is:
https://bitcointalk.org/index.php?topic=140078.msg1491085#msg1491085

The bandwidth might not be as prohibitive as you think. A typical transaction would be about 400 bytes (ECC is nicely compact). Each transaction has to be broadcast twice, so lets say 1KB per transaction. Visa processed 37 billion transactions in FY2008, or an average of 100 million transactions per day. That many transactions would take 100GB of bandwidth, or the size of 12 DVD or 2 HD quality movies, or about $18 worth of bandwidth at current prices.
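Just to sanity-check the arithmetic in that quote, here is the same back-of-the-envelope calculation in Python (the ~400 bytes per transaction, broadcast twice and rounded up to 1 KB, are Satoshi's assumptions, not mine):

visa_tx_fy2008 = 37_000_000_000       # Visa transactions in FY2008
tx_per_day = visa_tx_fy2008 / 365     # ~101 million per day
daily_gb = tx_per_day * 1_000 / 1e9   # 1 KB per transaction, as in the quote
print(f"{tx_per_day / 1e6:.0f} M tx/day -> {daily_gb:.0f} GB/day")
# Output: 101 M tx/day -> 101 GB/day, matching the "100GB of bandwidth"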
There is one problem with that approach: verification. Sending the whole chain is not a problem, but verifying it still is. And what is the bottleneck of verification? For example, CPU speed, which depends on frequency:
2011-09-13: "Maximum Speed | AMD FX Processor Takes Guinness World Record"

On August 31, an AMD FX processor achieved a Guinness World Record with a frequency of 8.429 GHz, a stunning result for a modern, multi-core processor. The record was achieved with several days of preparation and an amazing and inspired run in front of world-renowned technology press in Austin, Texas.
2022-12-21: First 9 GHz CPU (overclocked Intel 13900K)

See? Humans are still struggling to reach 8-9 GHz, and you need liquid nitrogen to maintain that value. More than a decade ago, the situation was pretty much the same. So CPU speed does not "double" every year. Instead, you just get more and more cores: a 64-core processor, for example, instead of a 2-core or 4-core one.
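Putting numbers on that: the two records above imply a growth rate of well under one percent per year, which a few lines of Python make obvious:

f_2011, f_2022, years = 8.429, 9.0, 11   # record clocks, 2011 -> 2022
cagr = (f_2022 / f_2011) ** (1 / years) - 1
print(f"{cagr:.2%} per year")            # ~0.60% per year
# Doubling every year would require 100% per year; actual growth is ~0.6%.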
Which means that yes, you can download 100 GB, maybe even more. But is the whole system really trustless if you have no chance of verifying that data, and you have to trust that all of it is correct? Imagine that you can download the whole chain very quickly, but it is not verified. What then?
Also note that if something can be done in parallel, then yes, you can use a 64-core processor and execute 64 different things at the same time. However, many steps during validation are sequential. The whole chain is a sequence of blocks. The whole block is a sequence of transactions (and their order does matter if one output is spent as an input by another transaction in the same block). The hashing of legacy transactions is sequential as well (and in cases like bare multisig, it has O(n^2) complexity for no reason), as the sketch below shows.
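The quadratic blow-up is easy to model. Pre-Segwit, each input's sighash covers a copy of roughly the whole transaction, so the total bytes hashed grow as O(n^2) in the number of inputs. A simplified Python sketch (the 148-byte input size and the zero-byte "serialization" are crude stand-ins, not the real format):

import hashlib

def legacy_sighash_bytes(num_inputs, bytes_per_input=148):
    """Rough model of pre-Segwit verification: each input's signature
    hashes a serialized copy of (almost) the whole transaction."""
    tx_size = num_inputs * bytes_per_input   # crude tx size, outputs ignored
    total = 0
    for _ in range(num_inputs):              # one sighash per input...
        hashlib.sha256(b"\x00" * tx_size)    # ...over a whole-tx-sized preimage
        total += tx_size
    return total                             # O(num_inputs^2) bytes hashed

for n in (100, 1_000, 10_000):
    print(f"{n:>6} inputs -> {legacy_sighash_bytes(n) / 1e6:,.1f} MB hashed")
#    100 inputs -> 1.5 MB hashed
#  1,000 inputs -> 148.0 MB hashed
# 10,000 inputs -> 14,800.0 MB hashed

Ten times the inputs means roughly a hundred times the hashing; Segwit's BIP143 sighash removed this blow-up by letting each input reuse cached whole-transaction components.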
So yes, you can have a 64-core processor running at 4 GHz per core, but a single core at 256 GHz would allow much more scaling. And this is one of the reasons why we don't have bigger blocks: the progress in validation time is simply not sufficient to push the limit much further.
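This is Amdahl's law in action: as long as some fraction of validation is inherently sequential, extra cores hit a hard ceiling, while a (sadly hypothetical) 64x faster single core would speed everything up by 64x. A small sketch, with an assumed 80% parallelizable share purely for illustration:

def amdahl_speedup(parallel_fraction, cores):
    """Best-case speedup with `cores` workers when only
    `parallel_fraction` of the work can run in parallel."""
    return 1 / ((1 - parallel_fraction) + parallel_fraction / cores)

p = 0.80   # assume 80% of validation parallelizes (illustrative number)
for cores in (2, 4, 16, 64, 1024):
    print(f"{cores:>4} cores -> {amdahl_speedup(p, cores):.2f}x")
#    2 cores -> 1.67x
#   64 cores -> 4.71x   (a 64x faster single core would give a full 64x)
# 1024 cores -> 4.98x   (capped below 5x no matter how many cores you add)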