In my design I have cross-partition transactions, but the way I accomplish this while maintaining the Nash equilibrium is to entirely centralize verification, i.e. all transactions are validated by all of the centralized validators. This eliminates the problem that full nodes have unequal incomes but equal verification costs, and thus ameliorates the economics that drives full nodes to become centralized. The centralized validators would still have a potential incentive to lie and short the coin. So the users in the system (hopefully millions of them) are constantly verifying a subsample of the transactions, so that statistically a cheating centralized validator is going to get caught fairly quickly, banned, and have their hard-won reputation entirely blown. Since these validations are done in a non-organized manner (i.e. the transactions are chosen randomly by each node), there is no viable way to collude to maintain a lie.
Regarding the last sentence, I take it that it refers to the extra validation performed by the users? How do you ensure that the selection of the txs to be validated is done randomly? And what incentives do the user nodes have to perform the extra validation at all?
Users are going to choose randomly because they have no reference point nor game-theoretic incentive on which to base a non-random priority for which transactions to validate. And I will probably make the transaction(s) they validate be those identified by the PoW share hash they must submit with their transaction, as sketched below.
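To illustrate that last point, here is a minimal sketch (my own, not part of the design spec) of how a PoW share hash could deterministically pick the transactions a user must validate. The function name, the SHA-256-plus-counter construction, and the hash-mod-block-size mapping are assumptions for illustration only:

```python
import hashlib

def txs_to_validate(pow_share_hash: bytes, block_tx_count: int, samples: int = 2) -> list[int]:
    """Map a submitted PoW share hash to a few pseudo-random transaction indices.

    Because the indices are derived from the share hash the user already had to
    produce, the user cannot cheaply bias which transactions they are assigned.
    """
    indices: list[int] = []
    counter = 0
    while len(indices) < samples:
        digest = hashlib.sha256(pow_share_hash + counter.to_bytes(4, "big")).digest()
        idx = int.from_bytes(digest[:8], "big") % block_tx_count
        if idx not in indices:
            indices.append(idx)
        counter += 1
    return indices

# Example: the share hash deterministically fixes the sample.
print(txs_to_validate(bytes.fromhex("ab" * 32), block_tx_count=1_000_000))
```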
Users have the incentive to validate because it is an insignificant cost (perhaps some milliseconds of work every time they submit a transaction), and they have every incentive to make sure the block chain's Nash equilibrium isn't lost to a cheating centralized verifier. Also there may be a lotto incentive where they win some reward by revealing a proof-of-cheating. There is no game-theoretic scenario in which users are better off not validating, i.e. an individual's failure to validate a subsample doesn't enable that user to short the coin. That was a very clever breakthrough insight (simple but still insightful).
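To put a rough number on the earlier claim that a cheating validator statistically gets caught fairly quickly, here is a minimal back-of-the-envelope sketch. The block size, per-user sample size, and user count are made-up illustrative figures, not parameters of the design:

```python
def miss_probability(block_txs: int, txs_checked_per_user: int, users: int) -> float:
    """Probability that not one of the users samples the single invalid transaction."""
    p_single_user_misses = 1.0 - txs_checked_per_user / block_txs
    return p_single_user_misses ** users

# Example: 1,000,000 txs in a block, each user checks only 2 of them,
# 100,000 users submitting transactions during that block.
print(miss_probability(1_000_000, 2, 100_000))        # ~0.82 chance of slipping past one block
print(miss_probability(1_000_000, 2, 100_000) ** 50)  # ~5e-5 chance of surviving 50 blocks
```

Even a tiny per-user sample drives the cheater's survival probability toward zero, exponentially in the number of independent samplers.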
Note my idea could also be applied to partitioning, so if centralized validators can't scale then I would still support partitioning (and even for scripts!), but I think partitioning and the reputations of multiple validators muddle the design and also muddle the economics (i.e. I think the validators will end up centralized within a partition anyway due to economics).
Note also that users can't normally validate constraints from the UTXO set because they aren't full nodes. So they will be depending on a full node to relay the relevant UTXO records to them, but these records can be verified (by the non-full-node users) to be correct against the Merkle trees.
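A minimal sketch, assuming a simple binary SHA-256 Merkle tree, of what that light-client check looks like; the leaf encoding and sibling-ordering convention here are illustrative assumptions, not the actual commitment format:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_merkle_proof(leaf: bytes, proof: list[tuple[bytes, str]], trusted_root: bytes) -> bool:
    """Recompute the root from a UTXO record and its sibling path, then compare
    against a Merkle root the non-full node already trusts (e.g. from a block header)."""
    node = sha256(leaf)
    for sibling, side in proof:          # side is "L" if the sibling sits on the left
        node = sha256(sibling + node) if side == "L" else sha256(node + sibling)
    return node == trusted_root

# Usage: the light client asks a full node for the UTXO record plus its proof path,
# then verifies locally; a lying full node cannot forge a path that hashes to the trusted root.
```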
So this is why I said Ethereum should perhaps try to hire me, but I also wrote a paragraph upthread saying basically "do not hire me". Hope readers don't think I am begging for a job. Geez.