Board Altcoin Discussion
Re: The Ethereum Paradox
by
TPTB_need_war
on 17/02/2016, 08:25:54 UTC
Following up on the bolded commitment quoted above, cross-partition transactions, even for plain asset transfers (e.g. a cryptocurrency, not scriptable block chains), also seem to destroy the Nash equilibrium, because the cascade of derivative transactions propagates across partitions, yet the validators did not validate all partitions (i.e. not all transactions).

I don't follow you. The network won't accept an invalid transaction, just as bitcoin doesn't accept an invalid block.

The entire point of partitions is that not all full nodes are validating (verifying) all transactions.

Thus of course the full node that wins a block (in PoW, and analogously in PoS or consensus-by-betting) is trusting the validators of the other partitions not to lie to him.

If that full node had to validate every transaction in every partition, then there wouldn't be partitions any more. The entire reason to make partitions is because verification costs are too high when every full node has to verify every transaction. Partitions exist to aid scaling.

Partitions can also enable other features such as instant confirmations, but that is a tangential discussion and I am not going to give away all my design before I launch it.

Also, in case my other point got lost in the sea of words upthread: in Satoshi's PoW, if every full node has to verify every transaction, then all full nodes bear the same verification costs, but their incomes vary because they have various levels of hashrate. Thus over time mining must become more centralized, because that fixed verification cost consumes a smaller fraction of income for those with higher hashrate. So eliminating verification costs for full nodes eliminates one of the economic forces that centralizes mining over time.

I also mentioned some of the other forces in my video, e.g. propagation costs, meaning not the cost of the bandwidth but the cost of mining on the wrong chain for longer periods of time relative to pools with more hashrate, which see a new block instantly when they produce it. In my design I eliminate propagation costs in a clever way, somewhat similar to what Iota is doing (but without the aspect of Iota that I assert won't allow it to converge without centralized control and enforcement of the mathematical model that payers and payees employ).
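The centralization argument above can be made concrete with a toy calculation. The numbers below (block reward, verification cost) are my own illustrative assumptions, not figures from the post; the point is only that a fixed per-node cost paired with hashrate-proportional income squeezes small miners first.

```python
# Toy illustration (assumed numbers): every full node pays the same
# verification cost per block interval, but expected income scales
# with hashrate share, so small miners can run at a loss.

BLOCK_REWARD = 25.0   # coins per block (2016-era Bitcoin subsidy), assumed
VERIFY_COST = 0.5     # fixed verification cost per interval, same for all nodes

def expected_margin(hashrate_share):
    """Expected block income minus the flat verification cost."""
    return hashrate_share * BLOCK_REWARD - VERIFY_COST

for share in (0.01, 0.05, 0.30):
    print(f"{share:.0%} of hashrate -> margin {expected_margin(share):+.2f}")
```

With these assumed numbers the 1% miner loses money on every interval while the 30% miner keeps almost its entire income, which is the centralizing pressure the paragraph describes.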

Ditto, I assume, for Fuserleer not wanting to give away his design for eMunie before he launches. Fuserleer has mentioned vaguely that he is using different data structures and that whoever commits a double-spend is then isolated into his own partition[1]. I don't know how he accomplishes this; it will be interesting to read his white paper. He has also said he is not using proof-of-work, but rather some form of propagation with different nodes having different responsibilities. I await his white paper and can't pre-judge it, except to say I am very skeptical (but willing to be surprised).

Note, in case it wasn't clear from my upthread posts: strict partitioning (no cross-partition transactions) for a crypto coin (i.e. asset transfers) maintains the Nash equilibrium. But cross-partition transactions for asset transfers do not maintain the Nash equilibrium (unless a statistical check is used, as I am proposing for my design; some may think this is dubious, but my white paper will make the argument for it). And strict partitioning for scripts can't exist, because the partitions are violated by external I/O.



The way I understand it (and that might be defective) is that the other partition has no way of validating the cross-partition tx.
If it could do that, i.e. if there were a unified database, then there would not really be a partition.

We were writing our posts at the same time; when I clicked to post mine, yours had appeared. Yes, it seems you understand the issue.



[1]
Correct with regard to your first scenario: if the 2 partitions never talk to each other in the future, you don't need to consider it. If they do talk to each other in the future and have to merge, this is where Bitcoin, blocks, POW and the longest chain rule fall on their arse. Only one partition can exist; there is no merge possibility, so the other has to be destroyed. Even if the 2 partitions have not existed for an extended period of time you are screwed, as they can never merge without a significant and possibly destructive impact on ALL historic transactions prior to the partition event, so you end up with an unresolvable fork. I feel this is a critical design issue which unfortunately imposes a number of limitations on Bitcoin.

CAP theorem certainly doesn't imply you can't ever fulfill C, A and P; most of the time you can, at least enough to get the job done. What it does state is that you can't fulfill all 3 to any sufficient requirement 100% of the time, as there will always be some edge cases that require the temporary sacrifice of C, A or P. This isn't the end of the world though, as detecting an issue with P is possible once nodes with different partitions communicate, at which point you can sacrifice C or A for a period of time while you deal with the issue of P.

If you structure your data set in a flexible enough manner, then you can limit the impact of P further. Considering CAP theorem once again, there is no mandate that prohibits most of the network being in a state that fulfills C, A and P, with only a portion of the network being in a state of partition conflict. For example, if there is a network of 100 nodes, and 1 of those nodes has a different set of data to everyone else and thus is on its own partition, the remaining 99 nodes can still be in a state of CAP fulfillment. The rogue node now has to sacrifice C or A in order to deal with P, while the rest of the network can continue on regardless.
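The 99-versus-1 scenario above can be sketched as a simple majority check. This is my own toy illustration, not any real node protocol: each node publishes a hash of its data set, and the node whose hash disagrees with the majority is the one that must sacrifice C or A while it resolves P.

```python
# Toy sketch (assumed, not a real protocol): nodes compare a hash of
# their data set; the minority hash identifies the partitioned node,
# while the majority continues to fulfill C, A and P.
import hashlib
from collections import Counter

def state_hash(dataset):
    """Deterministic digest of a node's data set."""
    return hashlib.sha256(repr(sorted(dataset)).encode()).hexdigest()

nodes = {f"node{i}": {"tx1", "tx2"} for i in range(99)}
nodes["rogue"] = {"tx1", "tx2", "conflicting_tx"}  # divergent partition

hashes = {name: state_hash(data) for name, data in nodes.items()}
majority_hash, count = Counter(hashes.values()).most_common(1)[0]
rogue_nodes = [n for n, h in hashes.items() if h != majority_hash]
print(rogue_nodes)  # only this node must sacrifice C or A
```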

All of this can be done without blocks quite easily, the difficulty is how to deal with P in the event of a failure, which is where consensus algorithms come into play.

Bitcoin's consensus of blocks and POW doesn't allow for merging, as stated, even if the transactions on both partitions are valid and legal.

DAGs and Tangles DO allow merging of partitions but there are important gotchas to consider as TPTB rightly suggests, but they aren't as catastrophic as he imagines and I'm sure that CfB has considered them and implemented functionality to resolve them.

Channels also allow merging of partitions (obviously that's why I'm here), but critically a node can be in both states of CAP fulfillment simultaneously. For the channels in which it has P conflicts it can sacrifice C or A; for the rest it can still fulfill CAP.


Let's rewind a bit and look at what's really going on under Bitcoin's hood.

Natural network partitions arise in BTC from one of 4 events happening:

1.  A node/nodes accept a block that has transactions which are double-spending an output present in another block
2.  A miner produces a block that conflicts with a block on the same chain height
3.  Network connectivity separates 2 parts of the network
4.  A miner has control of 51% or more of the hashrate

All 4 of these create a P inconsistency, and so the LCR (longest chain rule) kicks into action to resolve them. 

In the case of 1, miners can filter these against historic outputs and just reject the transaction. If multiple transactions are presented in quick succession that spend the same output, miners pick one to include in a block, or they could reject all of them. On receipt of a valid block, the remaining double-spend transactions that are not in a block get dumped. If a block with higher POW then turns up, all nodes switch to that block, which may or may not include a different transaction from the double-spend set.
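The mempool filtering just described amounts to indexing pending transactions by the outputs they spend. A hedged sketch (my own simplification; real implementations track full outpoints and handle eviction and replacement):

```python
# Sketch (assumed, simplified): a miner records which outputs are already
# spent by a pending transaction; a second spend of the same output is
# rejected rather than entering the pool.

spent_outpoints = {}  # outpoint -> txid of the first spender seen

def accept(txid, inputs):
    """Reject the transaction if any of its inputs is already spent in the pool."""
    if any(outpoint in spent_outpoints for outpoint in inputs):
        return False
    for outpoint in inputs:
        spent_outpoints[outpoint] = txid
    return True

print(accept("tx1", [("prev_tx", 0)]))  # first spend accepted
print(accept("tx2", [("prev_tx", 0)]))  # double-spend of the same output rejected
```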

In the case of 2, this happens ALL the time. Orphans cause temporary partitions in the network, but their duration is short enough that it doesn't cause any inconvenience. Worst case, you have to wait a little longer for your transaction to be included in the next block, if the accepted block that negates the orphan block doesn't have yours in it.

In the case of 3, if the separation duration is short, see 2. If it's long and sustained, one of the partitions will have to be destroyed, undoing any actions performed, legal or otherwise, and causing disruption and inconvenience.

In the case of 4, well, it's just a disaster. Blocks can potentially be replaced all the way back to the last checkpoint, and all transactions from that point could be destroyed.

There can also be local partition inconsistencies, where a node has gone offline and shortly afterwards the network has accepted a block or blocks that invalidate one or more of the most recent blocks it has. Once that node comes back online it syncs to the rest of the network and does not fulfill CAP at all. The invalid blocks that it had prior to coming back online are destroyed and replaced.

You could argue that this node creates a network level partition issue also to some degree, as it has blocks that the network doesn't, but the network will already have resolved this P issue in the past as it would have triggered an orphan event, thus I deem it to be a local P issue.

So what's my point?

In the cases of 1 or 2 there does not need to be any merging of partitions.  Bitcoin handles these events perfectly well with blocks, POW and LCR with minimal inconvenience to honest participants providing that the partition duration of the network is short (a few blocks). 

In the case of 3, which is by far the most difficult to resolve, partition tolerance degrades in proportion to the duration of the partitioned state. The longer the partition, the more likely there are conflicting actions which diverge the resulting states of the partitions further from each other, making resolution without consequence harder in any system. These partition events will always become unsolvable at some point, no matter what data structures, consensus mechanisms or other exotic methods are employed, as it is an eventuality that one or more conflicts will occur.

The fact is that DAGs/Tangles and our channels have better partition resolution performance in the case of event 3 because the data structures are more granular. An inconsistency in P doesn't affect the entire data set, only a portion of it, so it is resolvable without issue more frequently, as the chance of a conflict preventing resolution is reduced.
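The granularity point can be illustrated with a toy model. This is my own reading of the channel idea, not eMunie code: conflicts are tracked per channel, so a node keeps serving the consistent channels while only the conflicted one sacrifices C or A.

```python
# Illustrative sketch (assumed, not eMunie code): partition conflicts are
# scoped to individual channels, so most of the data set stays in full
# CAP fulfillment while one channel is being resolved.

channels = {
    "alice": {"conflicted": False},
    "bob":   {"conflicted": True},   # P conflict detected in this channel only
    "carol": {"conflicted": False},
}

available = [name for name, ch in channels.items() if not ch["conflicted"]]
resolving = [name for name, ch in channels.items() if ch["conflicted"]]
print(available, resolving)  # only 'bob' must sacrifice C or A
```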

Now, you haven't provided any detail on exactly how you imagine a block-based data structure could merge non-conflicting partitions, let alone conflicting ones. In fact I see no workable method to do this with blocks that may contain transactions across the entire domain. Furthermore, who creates these "merge" blocks, and what would be the consensus mechanism to agree on them? In the event of a conflict, how do you imagine that would be resolved?

When it comes to partition management and resolution where block-based data structures are employed, Satoshi has already given you the best they can do in the simplest form. Trying to do better with blocks is IMO a wild goose chase, and you'll end up with nothing but an extremely complicated and fragile system.