Post in Development & Technical Discussion
Re: How a floating blocksize limit inevitably leads towards centralization
by Stampbit on 13/04/2013, 21:34:55 UTC
If we want to cap the download overhead for the latest block at, say, 1% of the block interval, we need to be able to download MAX_BLOCKSIZE worth of data within 6 seconds on average, so that 99% of the time is spent hashing.

At 1MB, you would need a ~1.7Mbps connection to keep downloading time to 6s.
At 10MB, 17Mbps
At 100MB, 170Mbps

and you start to see why even a 100MB block size would render 90% of the world's population unable to participate in mining.
Even at 10MB, it requires investing in a relatively high-speed connection.
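The arithmetic above can be checked with a short sketch. This assumes decimal units (1 MB = 8,000,000 bits) and the 6-second target from the quote; the quoted ~1.7 Mbps per MB is somewhat higher than the raw line rate, presumably to leave headroom for protocol overhead:

```python
# Raw line rate needed to download a block of a given size within the
# 6-second target (6 s is 1% of the 10-minute average block interval).

TARGET_SECONDS = 6

def required_mbps(block_mb: float) -> float:
    """Minimum sustained throughput, in Mbps, to fetch the block in time."""
    bits = block_mb * 8_000_000  # decimal megabytes -> bits
    return bits / TARGET_SECONDS / 1_000_000

for size in (1, 10, 100):
    print(f"{size:>3} MB block -> {required_mbps(size):,.2f} Mbps minimum")
```

As the quote notes, the requirement scales linearly with the block size cap, so each 10x increase in MAX_BLOCKSIZE multiplies the minimum bandwidth by 10.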

Thank you. This is the clearest explanation yet of how an increase in the maximum block size raises the minimum bandwidth requirement for mining nodes.
Hmm.  The header can be downloaded in parallel with, or separately from, the block body, and hashing can start after receiving just the header, which takes only milliseconds.  Perhaps a "quick" list of outputs spent by the block would be useful for building non-trivial blocks that don't include double-spends, but that would be only ~5% of the block size.  Plenty of room for "optimization" here were it ever an issue.
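A back-of-envelope check of the claim above: a Bitcoin block header is a fixed 80 bytes, so even on the modest ~1.7 Mbps link from the earlier quote it arrives in well under a millisecond, while the full 1 MB body takes several seconds:

```python
# Compare download times for an 80-byte header vs. a full block body
# on the same link, to show why header-first mining starts almost
# immediately while the body is still streaming in.

HEADER_BYTES = 80  # fixed Bitcoin block header size

def download_ms(num_bytes: int, mbps: float) -> float:
    """Time in milliseconds to transfer num_bytes at the given link rate."""
    return num_bytes * 8 / (mbps * 1_000_000) * 1000

print(f"80 B header @ 1.7 Mbps:  {download_ms(HEADER_BYTES, 1.7):.3f} ms")
print(f"1 MB body   @ 1.7 Mbps:  {download_ms(1_000_000, 1.7):,.0f} ms")
```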

Fake headers / tx lists that don't match the actual body?  That's a black mark for the dude who gave it to you as untrustworthy.  Too many black marks and you ignore future "headers" from him as a proven time-waster.

Build up trust with your peers, just like real life.
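The "black mark" idea above could be sketched as a simple per-peer strike counter; the class, method names, and threshold here are purely illustrative, not taken from any actual client:

```python
# Hypothetical peer-scoring sketch: peers whose advance headers or tx
# lists turn out not to match the block body they later deliver
# accumulate strikes; past a threshold, their future announcements are
# ignored as coming from a proven time-waster.

MAX_STRIKES = 3  # illustrative threshold, not from any real implementation

class PeerBook:
    def __init__(self) -> None:
        self.strikes: dict[str, int] = {}  # peer id -> mismatch count

    def record_mismatch(self, peer: str) -> None:
        """Note that this peer's announced header/tx list didn't match the body."""
        self.strikes[peer] = self.strikes.get(peer, 0) + 1

    def trusts(self, peer: str) -> bool:
        """Should we still act on this peer's announcements?"""
        return self.strikes.get(peer, 0) < MAX_STRIKES

book = PeerBook()
for _ in range(3):
    book.record_mismatch("peer-A")
print(book.trusts("peer-A"))  # too many black marks: ignored
print(book.trusts("peer-B"))  # no strikes: still trusted
```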

Maybe I'm missing something here: why aren't blocks downloaded in the background while the current block is being worked on? Why is this bandwidth issue even an issue?