Board: Development & Technical Discussion
Merits 13 from 2 users
Re: The duplicate input vulnerability shouldn't be forgotten
by gmaxwell on 23/09/2018, 20:04:04 UTC
⭐ Merited by Foxpup (12), ETFbitcoin (1)
Quote
but you were recently someone who would attack other development teams trying to "improve the world" in their own way with even more harsh terms and toxic insults
It would be more productive if you were specific instead of vague. The vague allegations, devoid of context, just come across as toxic themselves-- a character attack, rather than a complaint about something specific that could be handled better.

I believe my suggested improvements to you above-- subscribe to the list, look for things you can do to help-- were specific, actionable, well justified, and hopefully not at all personally insulting.

Quote
encourage users to mess with it, possibly even with some sort of bounty scheme, and once sufficient time has passed and lots of different people and companies in the industry have tried to break it, you merge it in
In general, people ignore "test" things; they just don't interact with them at all unless the thing in question is on rails that will put it somewhere critical reasonably soon.

We see this pattern regularly: when people put up complex PRs and explicitly mark them as not-merge-ready, they get fairly small amounts of substantive review, and then after marking them merge-ready suddenly people show up and point out critical design mistakes. Resources are finite, so generally people prefer to allocate them to things that will actually matter, rather than to things that might turn out to be flights of fancy that never get finished or included. They aren't wrong to do it either; there are a lot more proposals and ideas than things that actually work and matter. You can also see it in the fact that far more interesting issues are reported by users on mainnet than on testnet. This isn't an argument to not bother with pre-release testing and test environments: both do help. But I think that we're likely already getting most of the potential benefit out of these approaches.

"testing" versions also suffer from a lack of interaction with reality. Many issues are most easily detected in live fire use, sadly.  And many kinds of problems or limitations are not the sort of thing that we'd rather never have an improvement at all than suffer a bit of a problem with it.

Additionally, how much delay do you think is required? This issue went undetected for two years. Certainly with lower stakes we could not expect a "testing" version to find it _faster_. Sticking every change behind a greater-than-two-year delay would mean a massive increase in exposure from the lack of beneficial changes, not to mention the knock-on cultural effects: I should have cited it specifically, but consider the study results suggesting that safety equipment like bike helmets or seatbelts sometimes results in less safe behavior. Checklists can be valuable, for example, but have also been shown to cause a reduction in personal responsibility and careful consideration: "I don't have to worry about reviewing this thing, because any issues will be found by the process and/or will be someone else's problem if they get through."

As I said above, if we had a series of concerning issues being found shortly after releases, it might be a good case for going more slowly, generally. But that isn't the case. With only a few exceptions, when we find issues that are concerning enough to justify a fast fix, they're old issues. We do find many issues quickly, but before they're merged or released.

I've generally found that for software quality each new technique has its own burst of benefit, but continued deeper application of the technique has diminishing marginal returns. Essentially, any given approach to quality has a class of problems it stops substantially while being nearly blind to other kinds of issues. Even limited application tends to pick the low-hanging fruit. As a result, it's usually better to assure quality in many different ways by many different people, rather than dump inordinate effort into one or a few techniques.

Quote
You claim that moving slower would potentially result in less testing, which makes no sense.[...] I can't fathom how moving slower wouldn't help here.
I'm disappointed; I think I explained directly and via analogy why this is the case, but it doesn't seem to have been communicated to you. Perhaps someone else will have a go at translating the point, if it's still unclear. Sad