Re: Revised swarm node proposal
Board: Development & Technical Discussion
by Realpra on 06/01/2016, 06:01:53 UTC
Quote
Today the order is random and without any effect.
No, the transactions are required by the consensus protocol to be dependency ordered. Any other ordering which does not meet this criterion potentially creates some super-linear costs in block validation.
My bad.

However, once TXs are sorted by their hashes and you have a lookup table for them, each lookup is no longer a linear scan, so removing this dependency ordering would be fine.
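
Just to illustrate (my own Python sketch, not Bitcoin Core's code; the helper names are made up): index every TX in the block by its hash first, then validate in any order - each parent lookup is then a hash-table hit.

Code:
import hashlib

def txid(raw_tx: bytes) -> bytes:
    # double-SHA256, as Bitcoin does for transaction ids
    return hashlib.sha256(hashlib.sha256(raw_tx).digest()).digest()

def validate_block(raw_txs, utxo_set, check_inputs):
    # pass 1: O(n) build of the lookup table
    index = {txid(raw): raw for raw in raw_txs}
    # pass 2: order no longer matters; in-block parents are found in O(1)
    for raw in raw_txs:
        check_inputs(raw, index, utxo_set)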

Quote
Quote
Each node will have an identity such as an ECDSA-AES public key over which all communication and chunk signing occurs.
A nit: "ECDSA-AES" isn't a thing...
Sorry, it should have said ECDH-AES (Elliptic Curve Diffie-Hellman key exchange with AES symmetric encryption).
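
Roughly what I mean, sketched with the Python "cryptography" library (illustrative only; the channel label and message are made up):

Code:
import os
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# each node holds a long-term EC key pair as its identity
node_a = ec.generate_private_key(ec.SECP256K1())
node_b = ec.generate_private_key(ec.SECP256K1())

# ECDH: both ends derive the same shared secret from their key pairs
shared = node_a.exchange(ec.ECDH(), node_b.public_key())
assert shared == node_b.exchange(ec.ECDH(), node_a.public_key())

# turn the shared secret into a 256-bit AES key
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"swarm-node-channel").derive(shared)

# encrypt a chunk message with AES-GCM
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"chunk payload", None)
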
Quote
but critically introducing persistent trusted "identity" into the bitcoin protocol would be a radical regression against the insights that Bitcoin brought in the first place. I believe it is neither useful nor desirable to do so, and that it may present considerable risks.
It's not trust in the sense of trust between humans - it is just anti-DDoS between machines, using the same hashing that Bitcoin itself uses, same as Hashcash etc.

Quote
Quote
2. Proof of work burn done for that specific key.
Creating a real monetary cost for operating a node is not a move which seems likely to result in a meaningfully decentralized system.
Same as mining - a real cost for doing "nothing".
It's 100% the same thing, nothing new.

For honest nodes this cost would be minor and one-time only.

Only attackers that keep getting blocked would need to keep burning hash power.
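
Something like this Hashcash-style stamp bound to the node key (just a sketch; the difficulty parameter is an example value):

Code:
import hashlib

def burn_for_key(pubkey: bytes, difficulty_bits: int = 20) -> int:
    # grind a nonce so SHA256(pubkey || nonce) has `difficulty_bits`
    # leading zero bits - a one-time cost tied to this specific key
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(pubkey + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def check_burn(pubkey: bytes, nonce: int, difficulty_bits: int = 20) -> bool:
    digest = hashlib.sha256(pubkey + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))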

Quote
Quote
Using the table and another node we request and get the unknown TX.
Excepting spam and mistakes (e.g. key loss) all outputs that are created are eventually consumed. A single transaction can create many outputs, and they usually do create multiple ones. They also usually consume multiple inputs.  In the approach presented this fact will result in most transactions being transmitted to most nodes in any case, losing much of the intended advantages.
 
I thought about this, but it is actually not that bad.

You would process on average 512 transactions per block; even if you had to fetch every parent TX they spend from, say 3 on average per transaction, that would only be about 0.68 MB per block. It is trivial to poll for this amount of data.

So yes, all nodes would need to get maybe 2048 TXs, but not 1 million. No advantage lost. ;)
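
The back-of-the-envelope numbers (the average TX size is my assumption, picked so the totals line up):

Code:
chunk_txs = 512        # transactions in this node's chunk (assumed)
avg_parents = 3        # parent TXs fetched per transaction (assumed)
avg_tx_bytes = 332     # rough average transaction size (assumed)

total_txs = chunk_txs * (1 + avg_parents)         # 2048
total_mb = total_txs * avg_tx_bytes / 1_000_000   # ~0.68 MB
print(total_txs, round(total_mb, 2))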

Quote
Now, instead of sending just a block, you get a part of a block but need to receive every transaction those transactions spend (which will be a considerable multiple, and almost all of them will need to come from remote nodes -- which, if they're offline, will leave you stuck and forced to either reject the longest chain temporarily (and potentially begin mining a fork) or accept the block without even verifying your part).
You would always have many backup connections for each chunk range.

Getting stuck would only happen if you had no connection at all, and that can't be helped.
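
What I have in mind, roughly (the peer table and the fetch() call are placeholders, not a real API):

Code:
import random

def fetch_chunk(chunk_range, peers_by_range, fetch, attempts=5):
    # try several peers that cover the same chunk range, falling back
    # to backups until one of them answers
    candidates = list(peers_by_range[chunk_range])
    random.shuffle(candidates)
    for peer in candidates[:attempts]:
        try:
            return fetch(peer, chunk_range)   # the chunk's TXs
        except ConnectionError:
            continue                          # peer offline, try a backup
    raise RuntimeError("no peer for this chunk range reachable")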

Quote
I think it's unfortunate that you've dismissed segwitness without, apparently, really understanding what it provides. Segwitness sets up a framework for the kinds of efficiency gains you're hoping to achieve here but without the same problems and without depending on introducing identified bonded nodes.
I understand it quite well.
You take the signatures out of the TXs into a separate Merkle tree, which allows 1 MB blocks to hold up to 4 MB of TX data - but only if done in a hacky way; otherwise a hard fork is needed.
I don't believe in hacky solutions for a 5-billion-dollar system, so it should be done as a clean hard fork or not at all.

Since you're hard forking anyway, just raising the limit to 4 MB would do the same thing.

If you're doing full validation, segwitness doesn't help, because you still have to get the signatures; you're still getting 4 MB of data. It is not magic.

It DOES solve signature malleability, which is nice, but that was never a major concern unless you were a bad programmer at Mt. Gox.
Did I miss something? Seems like a very expensive way to solve malleability and not much else.
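
For what it's worth, here is a toy sketch of that trade-off as I understand it (my own simplification, not the real serialization): the txid commits only to the non-witness data, so mutating a signature can no longer change the txid, but a fully validating node still has to download the witness bytes.

Code:
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

# toy transaction: the "core" fields and the signatures kept separately
core = b"version|inputs|outputs|locktime"
witness = b"signature-bytes-go-here"

txid  = dsha256(core)             # signatures excluded: tweaking a sig
wtxid = dsha256(core + witness)   # can no longer change the txid

# full validation still needs `witness` - the bytes don't disappear,
# they are just committed to separately.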