A point worth considering: if the delegated set is fixed over a prolonged period of time, then it's not trustless. If the delegate set changes constantly as the result of some random function, then it may be trustless, depending on how large the source set is.
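To make the "changes constantly as the result of some random function" idea concrete, here is a minimal sketch of one way to derive a per-round delegate set by hashing a shared seed with the round number. This is only an illustration of the general technique, not anyone's actual design; the names and parameters (source_set, shared_seed, k) are assumptions.

```python
# Illustrative sketch only: deterministic, rotating delegate selection.
import hashlib

def select_delegates(source_set, round_number, shared_seed, k):
    """Deterministically pick k delegates for a given round.

    Anyone who knows the source set, the shared seed, and the round number
    can recompute the same delegate set, so no single party controls
    membership and the set changes every round.
    """
    scored = []
    for node_id in source_set:
        digest = hashlib.sha256(
            f"{shared_seed}:{round_number}:{node_id}".encode()
        ).hexdigest()
        scored.append((digest, node_id))
    # The k smallest hashes win this round; a different round number
    # reshuffles the ordering, so the set changes constantly.
    return [node_id for _, node_id in sorted(scored)[:k]]

# Example: 10,000 candidate nodes, 5 delegates per round.
source_set = [f"node-{i}" for i in range(10_000)]
print(select_delegates(source_set, round_number=42, shared_seed="beacon-output", k=5))
```

The larger the source set, the harder it is for an attacker to predict or buy their way into any particular round, which is the intuition behind "depending on how large the source set is."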
And those are not the only two possible ways to structure the period of delegation. In fact, the delegation membership paradigm in my design is one of my key epiphanies which makes it Byzantine fault tolerant.
There were "issues" with this approach that I wasn't happy with and found myself running around in circles.
Indeed, there were key epiphany inventions I discovered that eliminated those circles. That is why I felt that what I revealed today isn't enough for someone to figure out my design, unless they have the same holistic design epiphanies.
The higher the frequency of delegate election, the worse the service they can provide
True (for various reasons such as ratios relating to propagation, denial-of-service, etc.).
And you were correct to follow up with this post which I will delete:
I disagree. So long as there is a reliable method that allows everyone to know who the delegates are at any moment in time, present or past, selection could happen at 1-second intervals or less.
Network latency prevents everyone knowing who the elected delegates are once you drop below 10-second intervals.
But...
- at one extreme, the service is completely trust-based and very efficient; at the other extreme, there is no advantage at all to having delegates, but they are completely trustless.
False assumptions galore.
Wait for the white paper. Amazing to me that what I invented seems so obvious in hindsight yet it isn't obvious to you all. Interesting.
All discussion about delegation should stop now. Otherwise I will be forced to delete some posts. We are cluttering this thread with too much detail.
Wait for the white paper; then you will all understand.
It reduces the exposure of the system to dishonest nodes, because, provided that the function output that determines the set selection is random, these dishonest nodes will never know when they will have an opportunity to be dishonest.
This then requires these nodes to be online constantly on the chance that they do get selected in the next round of delegates. If the selection set is sufficiently large, and the function output is random (or close to it), then the time between subsequent selections may be quite long, which acts as a discouragement due to the cost of staying online.
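As a rough illustration of that cost argument (the numbers are mine, not from this thread): if k delegates are drawn uniformly at random from N candidates each round, a given node expects to wait N/k rounds between selections, so a dishonest node must stay online for a long time for each brief opportunity.

```python
# Illustration only: expected gap between selections under uniform random choice.
def expected_wait_seconds(source_size, delegates_per_round, round_seconds):
    # A given node is chosen with probability k/N per round, so the expected
    # number of rounds between selections is N/k (geometric distribution).
    p = delegates_per_round / source_size
    return round_seconds / p

# Example: 10,000 candidates, 5 delegates, one round every 10 seconds
# -> roughly 5.5 hours, on average, between opportunities for any single node.
print(expected_wait_seconds(10_000, 5, 10) / 3600, "hours")
```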
Furthermore, if the selected set of delegates is short-lived, then any disruption caused by selecting dishonest nodes in the previous set will be minimal, as legitimate transactions that failed can simply be re-presented a short time later to a new set of delegates.
You can increase resilience further if you delegate the work to ALL selected delegates instead of just one or a few of them. Then they all perform the same work and you can think about using the output from that as a basis for consensus.
You have to consider Sybils and other things too, but the basic philosophy is as above.
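A toy sketch of the "all delegates perform the same work" idea, assuming the consensus basis is a simple majority vote over the delegates' outputs. It deliberately ignores Sybil resistance and real fault-tolerance thresholds; it only shows the basic philosophy.

```python
# Toy illustration: accept a result only if a strict majority of the
# delegate set produced it.
from collections import Counter

def consensus_result(delegate_outputs, total_delegates):
    """delegate_outputs: results returned by the selected delegates."""
    if not delegate_outputs:
        return None
    result, votes = Counter(delegate_outputs).most_common(1)[0]
    # Require a majority of the whole delegate set, not just of the
    # responses that happened to arrive.
    return result if votes > total_delegates // 2 else None

# Example: 5 delegates all validated the same transaction, one disagreed.
print(consensus_result(["valid", "valid", "valid", "invalid", "valid"], 5))  # -> valid
```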
Did I explain clearly? :| Not sure lol
This is a relativistic, probabilistic way of framing objectivity. It is more difficult to prove and argue than what my design attempts. It is perhaps a useful technique and an interesting discussion, but we should move it to another thread. We can probably improve our designs by incorporating more discussion along these lines. But anyway, I am overloaded right now just trying to get what I have already designed implemented. So for me, K.I.S.S. is important right now.