Board: Bitcoin Discussion
Topic: Re: Stop fuckin' around, fork the son-of-a-bitch already.
by Lauda on 22/09/2016, 15:13:28 UTC
The whole reason huge transactions are unsafe is because of quadratic scaling, which won't work if you split up the equation across two transactions that scale linearly with each other.    
I wasn't talking about that when I mentioned the potential of unknown attack vectors.  
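(For anyone unfamiliar with the quadratic-scaling point being quoted: with legacy, pre-SegWit signature hashing, verifying each input re-hashes roughly the whole transaction, so the hashing work for one giant transaction grows with the square of its size, while splitting the same payload into two transactions roughly halves the total work. The sketch below is a simplified back-of-the-envelope model; the input size and input counts are illustrative assumptions, not consensus constants.)

```python
# Back-of-the-envelope model of legacy (pre-SegWit) signature hashing,
# where verifying each input re-hashes roughly the entire transaction.
# INPUT_SIZE and the input counts are illustrative assumptions only.

INPUT_SIZE = 180  # rough bytes per legacy input (assumption)

def sighash_bytes(num_inputs):
    """Approximate bytes hashed to verify all inputs of one transaction."""
    tx_size = num_inputs * INPUT_SIZE
    return num_inputs * tx_size  # each input hashes ~the whole tx -> quadratic

one_big   = sighash_bytes(5000)          # one huge transaction
two_small = 2 * sighash_bytes(2500)      # same payload split into two txs

print(f"one 5000-input tx : {one_big / 1e9:.2f} GB hashed")
print(f"two 2500-input txs: {two_small / 1e9:.2f} GB hashed")
```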

So you agree there's no normal use case for 1MB or larger transactions, so why do you oppose limiting transactions to 1MB while increasing blocksize?
I don't agree with that. I haven't given it much thought, but there may very well be a normal use case for some business.

Are you mad because I can destroy your entire argument in 2 minutes of typing?
1) I don't get "mad" when someone rationally presents superior arguments. 2) You did no such thing.

I have proven this to be factually correct, and it remains true until you provide proof that your disagreement is backed by logical reasoning and facts.
You have done no such thing. You're starting to resemble Veritas.

And you can teach me?
I may or may not be able to, not that it would matter.

First of all, technology has become much cheaper in the past 6 years (on average at least). And besides, who really has a hard drive measured in gigabytes anymore?    
Strawman argument.
I'm sure this is in no way a strawman argument.
It's a pure example of the strawman fallacy. I never argued that "technology didn't become cheaper", did I? Don't attempt to use fallacies again, else we end up with nonsense such as "strawman nodes".  Roll Eyes

Yes, they're being sold and yes I plan to buy a 1/2 TB drive.
That's still plenty to run 20MB blocksize for several years, even if you falsely assume every single block is full. (and it's still a drive measured in terabytes)
20 MB per block x 6 blocks per hour x 24 hours a day x 365 days a year = ~1051 GB per year. Please explain how a 1/2 TB drive (aka 500 GB drive) would run for "several years".
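(A quick sketch that reproduces the arithmetic above under the same worst-case assumption that every block is completely full; it uses 1 GB = 1000 MB, as in the figure quoted.)

```python
# Reproduces the worst-case storage arithmetic: every 20 MB block full, all year.

block_size_mb   = 20
blocks_per_year = 6 * 24 * 365                        # ~one block every 10 minutes

growth_gb = block_size_mb * blocks_per_year / 1000    # 1 GB = 1000 MB here
print(f"~{growth_gb:.0f} GB of new block data per year")          # ~1051 GB

drive_gb = 500                                        # the "1/2 TB" drive in question
print(f"A {drive_gb} GB drive fills in ~{drive_gb / growth_gb:.1f} years")  # well under one year
```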

It's quite easy, you just divide the blocksize by the average time it takes to find a block.
20 MB / 10 minutes = 2 MB per minute; 2 / 60 ≈ 0.03 MB/s. Let me tell you why your thinking is flawed (not that you're going to admit this). If a node is downloading at exactly this speed, it can never catch up. Why is that? By the time it has downloaded one 20 MB block, another one has likely been created, and it would still be validating the previous block while the next one arrives. I do wonder how long it takes to validate a 20 MB block on decent hardware, though.
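(That 0.03 MB/s figure is simply the rate at which new block data is produced, so a node downloading at exactly that speed only keeps pace; any backlog from initial sync or validation delay never shrinks. A minimal sketch of the point; the download rate and backlog size are invented purely for illustration.)

```python
# Chain-growth rate versus a node that can only just match it.
# The download rate and backlog below are illustrative assumptions.

block_size_mb    = 20
block_interval_s = 600                                # ~10 minutes per block

production_rate = block_size_mb / block_interval_s    # ~0.033 MB/s
print(f"New block data is produced at {production_rate:.3f} MB/s")

download_rate = 0.033                                 # MB/s, barely matching production
backlog_mb    = 10_000                                # hypothetical data still to sync

net_rate = download_rate - production_rate            # how fast the backlog shrinks
if net_rate <= 0:
    print("The backlog never shrinks: this node can only keep pace, not catch up.")
else:
    print(f"Catch-up time: {backlog_mb / net_rate / 3600:.1f} hours")
```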

And what is the primary bottleneck then? I'm sure memory won't be a problem with blocksizes smaller than a few gigabytes.
Validation time.
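(To put a very rough number on that, and on the earlier question of how long a 20 MB block might take to validate: the sketch below counts only signature checks, and every constant in it is a guess rather than a benchmark.)

```python
# Extremely rough estimate of signature-verification time for a full 20 MB
# block of ordinary transactions. Every constant here is an assumption.

block_size_bytes = 20 * 1_000_000
avg_tx_bytes     = 500       # typical transaction size (assumption)
sigs_per_tx      = 2         # signatures verified per transaction (assumption)
sig_verify_ms    = 0.1       # ECDSA verification cost on decent hardware (assumption)

txs      = block_size_bytes // avg_tx_bytes
total_ms = txs * sigs_per_tx * sig_verify_ms
print(f"~{txs} transactions, ~{total_ms / 1000:.0f} s of signature checks per block")
```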

A 2 TB hard drive is only 800,00 kuna (I'm guessing you're still in Croatia) (£90 / $120 for those not wishing to convert kuna to Western currencies)
No, I am not and have never been in Croatia.