Post
Topic
Board Development & Technical Discussion
Re: Segregated witness - The solution to Scalability (short term)
by
Cconvert2G36
on 10/12/2015, 05:24:32 UTC
It sounds like a way to efficiently reduce the weight of blocks by removing something that's not needed when possible.

As merely one question, can we really consider the signature as something that's not needed?

I get that we're not _eliminating_ the sig, merely putting it in a separate (segregated) container, apart from the rest of the transaction. But any entity that wants to use Bitcoin in a trustless manner needs to fully validate each transaction. Such entities will need the signature, right? So they will need both components, meaning no data reduction for them, right?
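A rough sketch of the distinction being drawn here (names and byte sizes are hypothetical, not taken from any actual serialization format): the signature moves into a separate witness structure, so a node that skips signature checks can fetch less data, but a fully validating node still needs everything.

```python
# Hypothetical sketch: a segregated-witness transaction split into a
# "base" part and a "witness" part. A fully validating node still needs
# BOTH to check signatures; the split changes where the signature lives,
# not whether it exists.

from dataclasses import dataclass

@dataclass
class SegWitTx:
    base: bytes     # version, inputs, outputs, locktime
    witness: bytes  # signatures, kept in a separate structure

    def full_size(self) -> int:
        # What a fully validating node must download and verify.
        return len(self.base) + len(self.witness)

    def base_size(self) -> int:
        # What a node that forgoes signature validation could fetch.
        return len(self.base)

tx = SegWitTx(base=b"\x01" * 190, witness=b"\x02" * 110)
print(tx.full_size())  # 300 -- no data reduction for full validators
print(tx.base_size())  # 190 -- only non-validating consumers save bandwidth
```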

Currently, relay nodes verify each transaction before forwarding it, do they not? If they are denied the signature, they can no longer perform this verification. This seems to me to be a drastically altered division of responsibilities. Sure, this may still work, but how do we know whether this is a good repartitioning of the problem?

Further, does this open a new attack vector? If 'nodes' are going to stop validating transactions before forwarding them, then there is nothing to stop them from forwarding invalid transactions. What if an attacker were to inject many invalid transactions into the network? Being invalid, they would be essentially free to create in virtually unbounded quantities. If nodes are no longer validating before forwarding, this would result in 'invalid transaction storms', which could consume many times the bandwidth of the relatively small volume of actual valid traffic. If this is indeed a valid concern, then it would work exactly contrary to the stated goal of increasing scalability.
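The concern in the two paragraphs above can be made concrete: validate-before-relay is the filter that makes invalid transactions die at the first honest hop. A minimal sketch, with hypothetical names and a stand-in for real signature verification:

```python
# Hypothetical sketch of the relay policy described above: a node verifies
# a transaction before forwarding it. If the witness (signature) is
# withheld, the node cannot verify and must refuse to relay -- it cannot
# safely forward blind.

def verify(base: bytes, witness: bytes) -> bool:
    # Stand-in for real signature verification (an assumption, not the
    # actual consensus check).
    return witness.startswith(b"sig:")

def should_relay(tx: dict) -> bool:
    witness = tx.get("witness")
    if witness is None:
        return False  # no signature available: cannot validate, do not forward
    # Invalid transactions stop here instead of propagating for free.
    return verify(tx["base"], witness)

print(should_relay({"base": b"tx", "witness": b"sig:abc"}))  # True
print(should_relay({"base": b"tx"}))                         # False
```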

Note I am not making any claims here; I am asking questions, prompted by my incomplete understanding of this feature.

Some of us are suffering from a sort of whiplash... we've been told (by some factions and their hangers-on) for months that raising the max block size even to 2MB is highly dangerous for decentralization. But now, completely reorganizing some of the basic functions of the protocol, with a (somewhat unnecessary) requirement that there be no hard fork... has led us to the point where the same group with those concerns... is offering a fairly drastic solution that effectively raises the requirements for fully validating nodes to a 4MB (or 2?) max equivalent.

SegWit is widely agreed to be a net positive to incorporate into Bitcoin (especially if it can kill the malleability problem), but the burden of vetting and testing should be much more involved than for a one-line patch like BIP102. My fear is that we will be into 2017 before anything is deployed, and we will continue to be without the base data that garzik's 102 would provide. And the precedent that "hard forks r bad n scary" would still be firmly in place, and would be rolled out to stifle any possibility of future main chain capacity growth.