If the N-squared performance assumes a single giant transaction, why not a rule to limit the size of a transaction?
Seems like a no-brainer. I've been looking through the commits, but couldn't find it. I think Classic has a defense along those lines.
Even so, a 1 MB transaction can still take a while to validate. A hypothetical scenario described here:
https://bitcointalk.org/?topic=140078 states that a 1 MB transaction could take up to 3 minutes to verify. In reality, a roughly 1 MB transaction that took about 25 seconds to verify is described here:
http://rusty.ozlabs.org/?p=522. Anything over a few seconds is a long time by computing standards. Both of those scenarios are less likely now that libsecp256k1 has introduced significantly faster signature validation, but nodes are still vulnerable to such attacks: a maliciously crafted 1 MB transaction could, in theory, still take 25 seconds or longer to verify.
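For a rough sense of why the cost blows up quadratically: under the legacy sighash scheme, each input's signature check re-hashes a near-complete copy of the transaction, so an N-input transaction hashes on the order of N times its own size. A back-of-envelope sketch in Python (the per-input size and hashing throughput are assumed round numbers, not measurements):

# Rough, illustrative estimate of legacy (pre-segwit) sighash work for a
# transaction built almost entirely of inputs. All constants are assumptions.

TX_SIZE_BYTES = 1_000_000        # ~1 MB transaction (assumed)
BYTES_PER_INPUT = 180            # rough size of one legacy input (assumed)
HASH_BYTES_PER_SEC = 400_000_000 # assumed double-SHA256 throughput

num_inputs = TX_SIZE_BYTES // BYTES_PER_INPUT

# Legacy SIGHASH_ALL re-serializes and hashes (almost) the whole transaction
# once per input signature, so total hashed bytes grow roughly as N * size.
total_hashed = num_inputs * TX_SIZE_BYTES

print(f"inputs:       ~{num_inputs}")
print(f"bytes hashed: ~{total_hashed / 1e9:.1f} GB")
print(f"hashing time: ~{total_hashed / HASH_BYTES_PER_SEC:.0f} s (ignoring ECDSA cost)")

With those assumptions the hashing alone lands in the ~10 second range, the same order of magnitude as the report linked above.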
The point is that I don't see any huge outcry if a tx is limited to, say, 1024 vins/vouts or some such number. If that avoids the N*N behavior, it seems a simple way to do it; a rough sketch of what I mean follows.
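To be concrete, here is a minimal standalone sketch of such a check; the 1024 cap, the Tx type, and the function name are hypothetical and not taken from any existing codebase:

# Hypothetical per-transaction cap on input/output counts. The constants,
# the Tx type, and the check itself are illustrative only.
from dataclasses import dataclass, field
from typing import List

MAX_VINS = 1024   # hypothetical cap on inputs
MAX_VOUTS = 1024  # hypothetical cap on outputs

@dataclass
class Tx:
    vin: List[bytes] = field(default_factory=list)   # serialized inputs
    vout: List[bytes] = field(default_factory=list)  # serialized outputs

def check_tx_limits(tx: Tx) -> bool:
    """Reject transactions whose input or output count exceeds the cap,
    bounding the worst-case quadratic sighash work per transaction."""
    return len(tx.vin) <= MAX_VINS and len(tx.vout) <= MAX_VOUTS

Whether such a cap would be a relay/standardness policy or a consensus rule is a separate question; the point is only that the check itself is trivial.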
On the non-malleable txid front, I can't find any issues with T. Nolan's approach, and any need for internal lookup tables is a local implementation matter, right? Should limitations of existing implementations constrain improving the protocol?
Just a question about tradeoffs.
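For reference, my understanding of the normalized-txid idea is that the id is computed over the transaction with the scriptSigs omitted, so malleating a signature no longer changes it. A toy sketch under that assumption; the serialization here is deliberately simplified and is not real Bitcoin encoding:

# Toy illustration of a "normalized" txid: hash the transaction with all
# scriptSigs left out, so changing a signature does not change the id.
# This is NOT real Bitcoin serialization.
import hashlib
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TxIn:
    prevout: Tuple[bytes, int]  # (txid, output index) being spent
    script_sig: bytes           # signature script (the malleable part)

@dataclass
class SimpleTx:
    vin: List[TxIn]
    vout: List[Tuple[int, bytes]]  # (amount, scriptPubKey)

def normalized_txid(tx: SimpleTx) -> bytes:
    """Double-SHA256 over a serialization that omits every scriptSig."""
    h = hashlib.sha256()
    for txin in tx.vin:
        h.update(txin.prevout[0])
        h.update(txin.prevout[1].to_bytes(4, "little"))
        # txin.script_sig deliberately excluded
    for amount, script_pubkey in tx.vout:
        h.update(amount.to_bytes(8, "little"))
        h.update(script_pubkey)
    return hashlib.sha256(h.digest()).digest()

Two copies of the same transaction that differ only in their signatures hash to the same id under this scheme, which is the property that matters here.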
If the position is that anything that requires retooling the codebase to handle the increased load in the future is not acceptable, ok, just say so. After all, I don't want to be shamed again for suggesting that the current code isn't absolutely perfect in all ways possible. I just want to know what the ground rules are. If the current codebase is sacred, then it changes the analysis of what is and isn't possible. Who am I to suggest making code changes to people who are 100x smarter than me.
James