Re: Gold collapsing. Bitcoin UP.
by TPTB_need_war on 07/07/2015, 08:37:05 UTC
IBLT doesn't currently exist, and other mechanisms like the relay network protocol don't care about mempool synchronization levels.

IBLT does exist as it has been prototyped by Kalle and Rusty. It is just nowhere near ready for a pull request.
It has never relayed a _single_ block, not in a lab, not anywhere. It does _not_ exist. It certainly can and will exist-- though it's not yet clear how useful it will be over the relay network-- Gavin, for example, doesn't believe it will be useful "until blocks are hundreds of megabytes".

But don't think that I'm saying anything bad about it-- I'm not. Cypherdoc was arguing that mempools were (and had to be) the same, and cited IBLT as a reason-- but it cannot currently be a reason, because it doesn't exist. Be careful about assigning virtue to the common-fate aspect of it-- it can make censorship much worse. (OTOH, Rusty's latest optimizations reduce the need for consistency; and my network block coding idea-- which is what inspired IBLT, but is more complex-- basically eliminates consistency pressure entirely.)
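
For readers unfamiliar with the data structure, here is a minimal sketch of the set-reconciliation idea behind IBLT-- an illustration, not Kalle and Rusty's actual prototype. The cell count, partitioning, and 8-byte key width are arbitrary choices for the example; the point is that the structure's size scales with the mempool *difference*, not with the block.

Code:
import hashlib

NUM_HASHES = 3
CELLS_PER_HASH = 10   # illustrative sizing; real IBLTs size by expected diff

def _h(key: bytes, i: int) -> int:
    return int.from_bytes(hashlib.sha256(bytes([i]) + key).digest()[:4], "big")

def _checksum(key: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b"chk" + key).digest()[:4], "big")

class IBLT:
    """Each cell holds [count, XOR of keys, XOR of key checksums]."""

    def __init__(self):
        self.cells = [[0, 0, 0] for _ in range(NUM_HASHES * CELLS_PER_HASH)]

    def _touch(self, key: bytes, delta: int):
        k = int.from_bytes(key, "big")
        for i in range(NUM_HASHES):
            cell = self.cells[i * CELLS_PER_HASH + _h(key, i) % CELLS_PER_HASH]
            cell[0] += delta
            cell[1] ^= k
            cell[2] ^= _checksum(key)

    def insert(self, key: bytes):
        self._touch(key, +1)

    def delete(self, key: bytes):
        self._touch(key, -1)

    def peel(self):
        """Recover the keys present on only one side of the difference."""
        only_inserted, only_deleted = [], []
        progress = True
        while progress:
            progress = False
            for cell in self.cells:
                if cell[0] in (1, -1):
                    key = cell[1].to_bytes(8, "big")
                    if _checksum(key) != cell[2]:
                        continue           # cell actually holds several keys
                    (only_inserted if cell[0] == 1 else only_deleted).append(key)
                    self._touch(key, -cell[0])   # remove it and keep peeling
                    progress = True
        return only_inserted, only_deleted

# Sender inserts the block's (here 8-byte) txids; receiver deletes its own
# mempool's txids from what it received and peels out the symmetric difference.
sketch = IBLT()
for txid in (b"txid0001", b"txid0002", b"txid0003"):
    sketch.insert(txid)
for txid in (b"txid0002", b"txid0003", b"txid0099"):
    sketch.delete(txid)
print(sketch.peel())   # ([b'txid0001'], [b'txid0099'])

Note how the shared transactions (txid0002, txid0003) cancel out entirely: only the difference survives to be decoded, which is why divergent mempools only cost you in proportion to how divergent they are.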

Quote
I recall that you had a tepid response summarizing the benefit of IBLT as a 2x improvement. Of course this is hugely dismissive, because it ignores a very important factor in scaling systems: required information density per unit time. Blocks having to carry in 1 second all the data that earlier took 600 seconds is a bottleneck in the critical path.
It depends on what you're talking about: if you're talking about throughput it's at best a 2x improvement; if you're talking about latency it's more. But keep in mind that the existing, widely deployed block relay network protocol reduces the data sent per already-known transaction to _two bytes_.
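
To make that two-byte figure concrete, here is a hedged sketch of the idea-- illustrative names and framing, not the relay network's actual wire format: peers maintain a shared table of recently relayed transactions, so a block can reference each already-known transaction by a 2-byte index instead of retransmitting it.

Code:
import struct

ESCAPE = 0xFFFF   # reserved index meaning "full transaction follows"

def encode_block(txs, shared_table):
    """Encode each tx as a 2-byte index into the peers' shared table,
    falling back to the full payload for transactions the peer lacks."""
    out = bytearray()
    for tx in txs:
        idx = shared_table.get(tx)
        if idx is not None and idx < ESCAPE:
            out += struct.pack(">H", idx)            # known tx: 2 bytes
        else:
            out += struct.pack(">H", ESCAPE)         # unknown tx: escape,
            out += struct.pack(">I", len(tx)) + tx   # length, then payload
    return bytes(out)

def decode_block(data, table_by_index):
    txs, pos = [], 0
    while pos < len(data):
        (idx,) = struct.unpack_from(">H", data, pos); pos += 2
        if idx != ESCAPE:
            txs.append(table_by_index[idx])          # look it up locally
        else:
            (n,) = struct.unpack_from(">I", data, pos); pos += 4
            txs.append(data[pos:pos + n]); pos += n
    return txs

# A 3-transaction block with 2 already-known txs costs 4 bytes plus one
# full transaction on the wire, instead of three full transactions.
table = {b"tx-a": 0, b"tx-b": 1}
wire = encode_block([b"tx-a", b"tx-b", b"tx-new"], table)
assert decode_block(wire, [b"tx-a", b"tx-b"]) == [b"tx-a", b"tx-b", b"tx-new"]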

Compression of an exchange of differential sets is interesting. Your extensive real-world experience with codecs is evident on the page you linked, which is outside my current knowledge. I would need to devote some time to fully digest the specifics of your proposal (and perhaps Rusty's optimizations to IBLT). I do understand the concept that an error-correcting code allows us to reconstruct a signal in a noisy channel.
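
As a toy illustration of that intuition (not your proposal): a single XOR parity block lets a receiver reconstruct any one missing block, the same way a decoder recovers a signal despite channel erasures. Fountain and Reed-Solomon codes generalize this to many losses.

Code:
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

data = [b"ABCD", b"EFGH", b"IJKL"]
parity = xor_blocks(data)          # sent alongside the data blocks

# the channel drops data[1]; receiver XORs what arrived with the parity
received = [data[0], data[2]]
recovered = xor_blocks(received + [parity])
assert recovered == data[1]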

As you've pointed out today, a large hashrate will still have a latency of 0 for all the blocks it mines. So unless these methods can bring latency down to a level at which the holistic game theory (in-band and out-of-band) has a stable equilibrium short of an economic cascade into oligarchic centralization, they won't necessarily stop censorship.
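
Some back-of-the-envelope numbers for that asymmetry, assuming Poisson block arrivals at a 600-second mean interval (an assumption, not a measurement): a block that takes t seconds to reach the rest of the network risks being orphaned with probability about 1 - e^(-t/600), while the miner who found it effectively sees t = 0.

Code:
import math

def orphan_risk(propagation_seconds, block_interval=600.0):
    # probability another block is found while ours is still propagating
    return 1 - math.exp(-propagation_seconds / block_interval)

for t in (0, 1, 6, 30):
    print(f"t = {t:>2}s -> orphan risk ~ {orphan_risk(t):.2%}")
# t =  0s -> orphan risk ~ 0.00%   (the miner's own blocks)
# t =  1s -> orphan risk ~ 0.17%
# t =  6s -> orphan risk ~ 1.00%
# t = 30s -> orphan risk ~ 4.88%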

Let's assume we can optimize away latency (whose economic effects are more general than just the orphan rate) in the current design of PoW cryptocurrency. We are still faced with unavoidable centralization due to block reward variance (which I explained today), and due to the transaction rate exceeding the bandwidth of home internet connections, which varies widely around the world and between wireless and wired.
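
To put rough numbers on the variance point (illustrative figures, assuming a Poisson block-finding process): a miner with hashrate share p expects about p x 4320 blocks per month at one block per 10 minutes, and the relative spread of that income scales as 1/sqrt(expected blocks)-- which is what drives small miners into pools.

Code:
import math

BLOCKS_PER_MONTH = 30 * 24 * 6   # ~4320 at one block per 10 minutes

for share in (0.25, 0.01, 0.0001):
    lam = share * BLOCKS_PER_MONTH        # expected blocks found per month
    cv = 1 / math.sqrt(lam)               # std dev / mean of monthly income
    print(f"hashrate share {share:.4%}: ~{lam:.2f} blocks/month, "
          f"income CV ~ {cv:.1%}")
# share 25%:     ~1080 blocks/month, CV ~ 3%    (steady income)
# share 1%:      ~43 blocks/month,   CV ~ 15%
# share 0.01%:   ~0.43 blocks/month, CV ~ 152%  (months with zero income)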

And if you were really serious about Ultimate Decentralization™ (the ™ is a joke), everyone who transacts would be a miner. In that case, the bandwidth of home connections could be a critical consideration, depending on the design of the consensus mining system.
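
For a sense of scale on that consideration-- the 250-byte average transaction size below is an assumption, and real sizes vary-- here is the sustained bandwidth needed just to receive transactions at a given rate:

Code:
AVG_TX_BYTES = 250   # assumed average transaction size; real sizes vary

def required_kbps(tx_per_second):
    return tx_per_second * AVG_TX_BYTES * 8 / 1000   # kilobits per second

for tps in (7, 100, 2000):   # ~Bitcoin's 2015 ceiling, mid-range, Visa-scale
    print(f"{tps:>5} tx/s -> ~{required_kbps(tps):,.0f} kbit/s sustained")
# 7 tx/s    -> ~14 kbit/s     (trivial)
# 100 tx/s  -> ~200 kbit/s
# 2000 tx/s -> ~4,000 kbit/s before protocol overhead -- already a strain
#              for many home connections, especially wireless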