Re: Chunked download & scalability
Board: Development & Technical Discussion
by pmlyon on 03/07/2014, 13:28:21 UTC
I should expand a bit on what I have in mind.

The idea is that I could send a request to a peer to download a block in a streaming fashion. Every 512 transactions, they would include a merkle proof that the preceding 512 transactions really are part of the block I requested.
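A minimal sketch of what verifying one such chunk proof could look like, assuming chunks are power-of-two sized and aligned so each chunk forms a complete merkle subtree (the function and variable names here are mine, not part of any existing protocol):

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(hashes: list) -> bytes:
    """Compute a merkle root, duplicating the last hash on odd-width levels."""
    level = list(hashes)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def verify_chunk(chunk_tx_hashes, branch, expected_root) -> bool:
    """Check that a chunk of tx hashes belongs to the block.
    `branch` lists (sibling_hash, sibling_is_left) pairs from the
    chunk's subtree root up to the block's merkle root."""
    node = merkle_root(chunk_tx_hashes)
    for sibling, sibling_is_left in branch:
        node = dsha256(sibling + node) if sibling_is_left else dsha256(node + sibling)
    return node == expected_root
```

The peer would send the `branch` hashes alongside each chunk, so the downloader can check every 512-transaction piece against the merkle root in the (already verified) block header.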

This lets me stream the block through validation without waiting for the full download: as each chunk is committed, validation can kick in and process it.
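The streaming pipeline could be sketched like this. Every callable here is a hypothetical stand-in for the node's real logic; the point is just the shape of the loop, where a bad proof aborts the download early:

```python
def stream_validate_block(chunk_stream, merkle_root, commit,
                          verify_chunk_proof, validate_txs):
    """Verify and validate a block chunk-by-chunk as it arrives,
    instead of waiting for the whole block first.

    chunk_stream       -- iterator of (chunk, proof) pairs from the peer
    merkle_root        -- root from the block header (already checked)
    commit             -- persists a verified chunk
    verify_chunk_proof -- checks the chunk's merkle proof
    validate_txs       -- script/UTXO validation for the chunk
    """
    for chunk, proof in chunk_stream:
        if not verify_chunk_proof(chunk, proof, merkle_root):
            # Bail out without downloading the rest of the block.
            raise ValueError("chunk failed merkle proof")
        validate_txs(chunk)   # validation runs per chunk, overlapping download
        commit(chunk)         # later chunks build on this committed state
```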

If another peer asks me for a streaming copy of that same block, I can also start streaming the block out to them before I've finished receiving it myself.

On the receiving end, you wouldn't be doing any more work than you do normally. If a block is invalid, you could potentially find that out sooner than you can currently, before you've had to download the entire thing.

If you start sharing out the block before you've finished receiving it, that would lead to more upstream bandwidth usage if the block does turn out to be invalid. I think mined headers are enough to mitigate that risk.