If you limit the transaction size to 1MB, nothing changes security-wise from having a 1MB blocksize with no transaction size limit at all.
There may or may not be attack vectors that could work with two 1MB transactions. Just because it seems safe doesn't mean it is.
The whole reason huge transactions are unsafe is quadratic scaling of signature hashing, and that attack doesn't carry over if you split the data across two transactions: each one is validated independently, so the cost only grows linearly with the number of transactions.
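To make the scaling concrete, here's a rough back-of-the-envelope cost model (the 180-byte input size and the function names are my own illustrative assumptions, not consensus code): with legacy signature hashing, every input re-hashes roughly the whole transaction, so one big transaction costs about twice as much to verify as the same inputs split across two transactions.

```python
# Rough cost model for legacy signature hashing, where each of a
# transaction's n inputs re-hashes roughly the whole transaction.
# The 180-byte input size is an illustrative assumption.

def sighash_work(num_inputs, bytes_per_input=180):
    """Approximate total bytes hashed to verify one transaction."""
    tx_size = num_inputs * bytes_per_input
    return num_inputs * tx_size  # each input hashes ~the whole tx: O(n^2)

one_big   = sighash_work(5000)       # one transaction with 5000 inputs
two_small = 2 * sighash_work(2500)   # the same inputs split across two txs

print(one_big / two_small)           # 2.0 -> splitting halves the hashing
```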
Who would ever need to fill an entire block with 1 transaction?
I'm not talking about normal usage when it comes to security problems.
So you agree there's no normal use case for 1MB or larger transactions. Why, then, do you oppose limiting transactions to 1MB while increasing the blocksize? That gets rid of your main argument (security) while also allowing more transaction throughput, and therefore more users.
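For what it's worth, the rule being proposed is trivial to express. Here is a minimal sketch (the constants and function names are hypothetical, not actual Bitcoin Core code):

```python
# Hypothetical consensus check for the proposal above: raise the block
# limit but cap individual transactions at 1MB. All constants and names
# are illustrative assumptions, not real Bitcoin code.

MAX_BLOCK_SIZE = 20_000_000  # assumed raised block limit, in bytes
MAX_TX_SIZE    = 1_000_000   # proposed per-transaction cap, in bytes

def block_size_ok(tx_sizes):
    """tx_sizes: serialized byte size of every transaction in a block."""
    if sum(tx_sizes) > MAX_BLOCK_SIZE:
        return False                      # block as a whole is too large
    return all(size <= MAX_TX_SIZE for size in tx_sizes)
```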
Are you mad because I can destroy your entire argument in 2 minutes of typing?
You are just disagreeing for the sake of disagreeing at this point.
Said every person who is losing ground.
Sorry, but I just destroyed your argument. You have no ground left to disagree on, except the right to disagree itself; you have no actual logical reason for it.
I have proven this to be factually correct, and it remains true until you provide proof that your disagreement is backed by logical reasoning and facts.
That doesn't mean you can't look at other coins as an example.
It's one thing to fork something small and centralized, and another to fork Bitcoin.
TIL ETH is centralized. Yeah they have their differences, but centralization is not one of them.
Bitcoin would become the MySpace of crypto.
Stop being "spoon fed" (as franky1 would put it) by Ver & co.
I don't even know Ver, but from the few interviews of his I have heard, he seems like a reasonable and nice guy, although I don't always agree with everything he says.
It does. You can't just say this every time someone disproves your statements.
It does not, as can be seen from your lack of experience with large-scale infrastructure.
And you can teach me?
First of all, technology has become much cheaper in the past 6 years (on average at least). And besides, who really has a hard drive measured in gigabytes anymore?
Strawman argument.
I'm sure this is in no way a strawman argument.
If anything, your conjecture is the strawman argument, because you are defending imaginary nodes (the strawman) that would supposedly be shut out of the network because they can't run on an imaginary system with a hard drive that belongs in the 90s and an internet connection that is basically smoke signals.
Any computer made after 2000, on even the most basic internet connection, has no problem at all running a node with 20MB blocks. And by no problem at all, I mean it is barely noticeable that you are even running a node while using your computer for other tasks at the same time.
If anyone is making a strawman argument it's you, because you're using strawman nodes to back up your defense. I am talking about actual technology and actual nodes, which are not affected by a larger blocksize in the slightest.
If you can find actual people who would be affected by a larger blocksize, be my guest; I'd like to hear their testimonies.
And since no one is going to buy a hard drive smaller than 1TB anyway (are they even sold anymore?), your argument is moot.
Yes, they're being sold, and yes, I plan to buy a 1 or 2 TB drive.
That's still plenty to run a node with a 20MB blocksize for several years, even if you falsely assume every single block is full (and it's still a drive measured in terabytes).
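For the record, the worst-case arithmetic behind that looks like this (straight multiplication, assuming the nominal 10-minute block interval):

```python
# Worst-case chain growth with 20MB blocks, assuming the nominal
# 10-minute block interval and every single block completely full.

block_size_mb  = 20
blocks_per_day = 24 * 60 / 10                    # ~144 blocks per day
tb_per_year    = block_size_mb * blocks_per_day * 365 / 1_000_000

print(tb_per_year)  # ~1.05 TB of new block data per year
```

So even with every block stuffed full, a terabyte-class drive absorbs roughly a year of chain growth per terabyte.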
The network argument is even weaker. Does anyone seriously have bandwidth so low that 266 kb/s is a problem? (That's what 20MB blocks would average out to.)
You obviously aren't factoring in the primary bottleneck, but let's go with this. How did you derive this number, i.e. by calculating what?
It's quite easy: you just divide the blocksize by the average time it takes to find a block. Obviously you would need an internet connection faster than that to be actually useful as a node, but on average that's the amount of data your connection would carry. When a new block is found, it uses more bandwidth for a few seconds, followed by a couple of minutes of almost no activity at all. The point is that 266 kb/s is ridiculously small; everyone has internet orders of magnitude faster than that, so even for the most basic internet user the network usage would not even be noticeable.
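In code, that derivation (my sketch of the arithmetic, assuming 20MB blocks and the nominal 10-minute interval):

```python
# Derivation of the 266 kb/s figure: average block data divided by the
# average 10-minute block interval, converted from bytes to kilobits.

block_size_bytes = 20 * 1_000_000   # 20MB blocks
block_interval_s = 10 * 60          # one block every ~10 minutes

avg_bytes_per_s = block_size_bytes / block_interval_s   # ~33,333 B/s
avg_kbit_per_s  = avg_bytes_per_s * 8 / 1000

print(avg_kbit_per_s)   # ~266.7 kb/s average download rate
```

Note this counts downloading each block once; a node that also relays blocks to several peers will use some multiple of that, but still nothing a modern connection would notice.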
And what is the primary bottleneck, then? I'm sure memory won't be a problem with blocksizes smaller than a few gigabytes.