So it's really a robust protocol. It's not cheating like Avalanche does with its randomized sampling, which will probably cause them major synchronization problems down the road.
I can see that. But in what sense is Avalanche cheating with its randomized sampling? I've read some material about it, and while I'm not that deep into their tech, they do have a working protocol that's already up and running.
I can see potential in what you're describing, but until we see it running like AVAX, don't drag their project down as if yours weren't still theoretical, right?
Thanks for the reply.
I am not really bashing Avalanche, just explaining its trade-offs. In classic Byzantine fault tolerant voting such as Practical Byzantine Fault Tolerance (pBFT), you need a supermajority quorum of 2f+1 out of 3f+1 nodes (more than two-thirds) to confirm a decision. Avalanche decided that sampling roughly 5, 10, or 20% of the network is enough, because the probability that consensus is reached is high. This was done for speed and decentralization. Although the system may work under normal conditions, it still has potential attack vectors and synchronization issues that might only be discovered with more load and traction, especially in the case of network partitions, where an attacker can influence which nodes the sampled votes come from. The design sounds very risky...
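To make the sampling idea concrete, here's a minimal sketch of that kind of repeated sub-sampling in Python. The parameters k (sample size), alpha (per-round agreement threshold), and beta (required consecutive confident rounds) are illustrative placeholders, not Avalanche's actual values:

```python
import random

def sample_query(nodes, k, get_preference):
    """Query a random sample of k nodes and tally their current preferences."""
    sample = random.sample(nodes, k)
    tally = {}
    for node in sample:
        pref = get_preference(node)
        tally[pref] = tally.get(pref, 0) + 1
    return tally

def sampled_decide(nodes, k, alpha, beta, get_preference, initial):
    """Repeatedly sample k nodes; accept a value once at least alpha of the
    sample agrees on it for beta consecutive rounds. Sketch only: a real
    protocol also updates each node's own preference between rounds."""
    preference, confidence = initial, 0
    while confidence < beta:
        tally = sample_query(nodes, k, get_preference)
        winner, votes = max(tally.items(), key=lambda kv: kv[1])
        if votes >= alpha:
            if winner == preference:
                confidence += 1   # same winner again: grow confidence
            else:
                preference, confidence = winner, 1   # switch preference
        else:
            confidence = 0        # inconclusive sample: reset
    return preference
```

The key point is that each round looks at only k sampled nodes instead of a full quorum, which is where both the speed and the risk come from: an adversary who can bias which nodes end up in the sample (for example, during a partition) biases the tally itself.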
Our system uses normal 50%+1 majority voting; speed is gained through vote re-usage, compression, and delegation. And we have a demo testnet, so it's not theoretical.
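For contrast, a minimal sketch of the majority rule I mean, again in Python. The function names and the vote re-usage part are hypothetical illustrations of the idea of carrying a vote for a block forward to its ancestors, not our actual implementation:

```python
def has_quorum(votes_for, total_nodes):
    """Classic majority rule: a decision is final once more than half
    of all nodes have voted for it (50% + 1)."""
    return votes_for >= total_nodes // 2 + 1

def finalize_chain(vote_counts, total_nodes):
    """Return the highest block height whose cumulative votes reach quorum,
    counting a vote for a block as an implicit vote for its ancestors too
    (the vote re-usage idea). vote_counts maps height -> fresh votes."""
    carried = 0
    for height in sorted(vote_counts, reverse=True):
        carried += vote_counts[height]   # votes for descendants count here too
        if has_quorum(carried, total_nodes):
            return height
    return -1   # nothing finalized yet
```

The contrast with the sampling sketch above is that finality here always requires seeing an absolute majority of the whole network, so a partition can stall progress but can't trick a minority sample into deciding.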