When he publishes code (tests), maybe I'll take a look.
Have no fear! Here they come!
Leave the benchmarking to the pros; great minds don't bother - they simply deny everything!
But I already know what you're gonna say:
- the tests were biased
- you don't believe the results, they are faked
- you don't believe the results even when someone else runs them
- you don't believe the results even if you run the Scooby Doo method yourself
- the code is not valid
- the results are wrong
- we are not reading the results correctly
- there is a glitch in the way we are running the code
- Python may be rigged, so it's not OK to rely on random
- this does not reflect real-life situations (you know what? you're right, but not to your benefit!)
So here goes nothing:
The Scooby Doo methodology was clearly explained in the initial presentation, step by step:
https://bitcointalk.org/index.php?topic=1306983.msg65307396#msg65307396
The prerequisite magic method was literally copy-pasted from your detailed exhibition over here:
https://bitcointalk.org/index.php?topic=5539190.0
And then run for 500 and 16385 simulations.
Why 16385? Oh well, just so it's not 16384 or 16386.
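For anyone who would rather skim than click through, here is a minimal Python sketch of what the harness boils down to: race two search strategies against the same random target, score a win for whichever needs fewer checks, and tally the wins over many simulations. The two strategies below are simple placeholders (a linear scan and a random permutation), NOT the actual Scooby Doo / prefix logic from the linked posts, and the toy keyspace size is made up:
[code]
import random

# Toy keyspace just for illustration; the real runs use the range from the linked posts.
KEYSPACE_BITS = 14
RANGE_SIZE = 1 << KEYSPACE_BITS


def sequential_search(target):
    """Placeholder 'Sequential': scan keys in order from 0, counting checks until the target is hit."""
    checks = 0
    for k in range(RANGE_SIZE):
        checks += 1
        if k == target:
            return checks


def prefix_search(target):
    """Placeholder 'Prefix': visit keys in a random order without repeats.
    A stand-in only, not the actual prefix method from the linked thread."""
    checks = 0
    for k in random.sample(range(RANGE_SIZE), RANGE_SIZE):
        checks += 1
        if k == target:
            return checks


def run(simulations):
    wins = {"Sequential": 0, "Prefix": 0, "Ties": 0}
    for _ in range(simulations):
        target = random.randrange(RANGE_SIZE)
        s = sequential_search(target)
        p = prefix_search(target)
        if s < p:
            wins["Sequential"] += 1
        elif p < s:
            wins["Prefix"] += 1
        else:
            wins["Ties"] += 1
    print("=== FINAL RESULTS ===")
    print("Wins:")
    for name, count in wins.items():
        print(f"{name}: {count}")


if __name__ == "__main__":
    run(500)
[/code]
The real runs below use the actual methods from those two links, of course, not these stand-ins.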
500 simulations
Simulation 500: Sequential = 68008 checks in 0.069305s | Prefix = 95347 checks in 0.122294s
=== FINAL RESULTS ===
Wins:
Sequential: 150
Prefix: 317
Ties: 33
16385 simulations
=== FINAL RESULTS ===
Wins:
Sequential: 5744
Prefix: 9827
Ties: 814
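In relative terms, that's roughly 60% wins for Prefix (9827/16385), 35% for Sequential (5744/16385), and 5% ties (814/16385).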
Stats after the first 15k simulations (I feel like I'm repeating myself... does anyone else see the SNAIL in the bottom left corner? Should we maybe change the name of the method to better reflect the behavior?)
