Board: Development & Technical Discussion
Re: Maintaining the growing blockchain ledger size in local full nodes
by ranochigo on 18/11/2017, 08:02:54 UTC
It could work like pruned mode: download everything but store only some of it. Or it could use selective downloading. In both cases, as is already done today, the block headers can be downloaded and kept in full to provide some verification.
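As a toy illustration of the header check mentioned above: even a node that stores only some blocks can verify that the header chain links up. The `Header` class and its fields here are simplified stand-ins, not the real format (actual Bitcoin headers are 80-byte structures hashed with double SHA-256):

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Header:
    prev_hash: bytes   # hash of the previous block's header
    payload: bytes     # stand-in for the remaining header fields

    def hash(self) -> bytes:
        # Toy hash over the two fields we model; real headers use
        # double SHA-256 over a fixed 80-byte serialization.
        return hashlib.sha256(self.prev_hash + self.payload).digest()

def verify_chain(headers: list[Header]) -> bool:
    # Each header must commit to the hash of its predecessor.
    return all(cur.prev_hash == prev.hash()
               for prev, cur in zip(headers, headers[1:]))
```

This only checks linkage (and, in the real protocol, proof of work); it says nothing about whether the transactions inside a block are valid, which is the objection raised below.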
So you're only solving the storage problem? You'd probably end up connecting to hundreds of nodes just to download the whole blockchain. Block headers don't show whether a block is valid. On top of that, synchronization could end up taking a LOT longer: the current implementation lets several peers serve any given block, whereas with your implementation only one peer can provide a specific portion of blocks, making it a bottleneck.
I'm not sure of the protocol details, but that seems relatively easy to solve. The network already distinguishes full nodes from pruned and SPV nodes. Also, current full nodes aren't always up to date with the latest block, e.g. during a startup resync. As for deciding what to store, maybe a probabilistic approach could work, or something fancier that involves probing the network.
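A minimal sketch of one such probabilistic approach (`should_store`, `node_id`, and `keep_fraction` are all hypothetical names for illustration, not anything from the Bitcoin protocol): each node hashes its own identifier together with the block hash, so nodes independently keep different pseudo-random subsets of blocks without any coordination:

```python
import hashlib

def should_store(block_hash: str, node_id: str, keep_fraction: float = 0.1) -> bool:
    # Hash the block hash together with this node's identifier so that
    # different nodes keep different (pseudo-random) subsets of blocks.
    digest = hashlib.sha256((node_id + block_hash).encode()).digest()
    # Map the first 8 bytes of the digest to a value in [0, 1).
    score = int.from_bytes(digest[:8], "big") / 2**64
    return score < keep_fraction
```

With n reachable peers each keeping a fraction f, the chance that none of them stores a given block is roughly (1 - f)^n, so the fraction could in principle be tuned to the expected peer count; whether enough such peers are actually discoverable is the objection below.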
It's not easy. How does a full node being out of sync relate to the problem? Nodes cannot see the entire network, and no one can connect to all of them. Unless you somehow decide to centralise Bitcoin and have nodes connect to a central server, the distribution of blocks will be severely lopsided.
That's unlikely to be viable for long. Blocks are already mostly full today, and the network is congested. If SegWit takes hold, growth might reach 100 GB/year or more (I'm not sure whether everyone needs to store the witness data?) within months, and even that won't be enough for much longer if the network is to keep growing.
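Rough arithmetic behind a figure of that order (the ~2 MB effective block size is an assumption about post-SegWit usage, not a measured value):

```python
blocks_per_day = 24 * 6                  # one block roughly every 10 minutes
blocks_per_year = blocks_per_day * 365   # ~52,560 blocks per year
effective_block_mb = 2                   # assumed post-SegWit effective size
growth_gb_per_year = blocks_per_year * effective_block_mb / 1000
print(round(growth_gb_per_year, 1))      # ~105.1 GB/year
```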
Well, to be fair, not everyone has to run a full node. Since you're only solving the bandwidth part, you might as well just run a pruned node instead of going through the trouble.