When he publishes code (tests), maybe I'll take a look.
Have no fear! Here they come!
Let the benchmarking be done by the pros; great minds don't bother - they simply deny everything!
But I already know what you're gonna say:
- the tests were biased
- you don't believe the results, they are faked
- you don't believe the results even if someone else runs them
- you don't believe the results even if you run the Scooby Doo method yourself
- the code is not valid
- the results are wrong
- we are not reading the results correctly
- there is a glitch in the way we are running the code
- Python may be rigged, it's not OK to rely on random
- this does not reflect real-life situations (you know what? you're right, but not to your benefit!)
So here goes nothing:
The Scooby Doo methodology was clearly explained in the initial presentation, step by step:
https://bitcointalk.org/index.php?topic=1306983.msg65307396#msg65307396
The prerequisite magic method was literally copy-pasted from your detailed exhibition over here:
https://bitcointalk.org/index.php?topic=5539190.0
And then run for 500 and 16385 simulations.
Why 16385? Oh well, just so it's not 16384 or 16386.
500 simulations:

```
Simulation 500: Sequential = 68008 checks in 0.069305s | Prefix = 95347 checks in 0.122294s

=== FINAL RESULTS ===
Wins:
Sequential: 150
Prefix: 317
Ties: 33
```

16385 simulations:

```
=== FINAL RESULTS ===
Wins:
Sequential: 5744
Prefix: 9827
Ties: 814
```
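For anyone who wants to reproduce the tally format above without digging through the thread, here is a minimal sketch of a scoring harness. It is a reconstruction, not the thread's actual script: `run_sequential` and `run_prefix` are hypothetical stand-ins for the two methods in the linked posts, and the win criterion (fewer checks) is an assumption.

```python
import time

def tally(n_sims, run_sequential, run_prefix):
    # run_sequential / run_prefix: hypothetical stand-ins; each takes a
    # simulation index and returns the number of checks it performed.
    wins = {"Sequential": 0, "Prefix": 0, "Ties": 0}
    for i in range(1, n_sims + 1):
        t0 = time.perf_counter()
        seq = run_sequential(i)
        seq_t = time.perf_counter() - t0

        t0 = time.perf_counter()
        pre = run_prefix(i)
        pre_t = time.perf_counter() - t0

        print(f"Simulation {i}: Sequential = {seq} checks in {seq_t:.6f}s"
              f" | Prefix = {pre} checks in {pre_t:.6f}s")
        # Assumed win criterion: fewer checks wins, equal is a tie.
        if seq < pre:
            wins["Sequential"] += 1
        elif pre < seq:
            wins["Prefix"] += 1
        else:
            wins["Ties"] += 1

    print("=== FINAL RESULTS ===")
    print("Wins:")
    for name, count in wins.items():
        print(f"{name}: {count}")
```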
Stats after the first 15k simulations (I feel like I'm repeating myself... does anyone else see the SNAIL in the bottom left corner? Should we maybe change the name of the method to better reflect the behavior?)

Is it so difficult for you to post the entire code?
AI:
Replacing the condition that checks whether the hash begins with the expected prefix, namely:
```python
if not found_prefix and h.startswith(prefix_hash):
```
with a random condition like:
```python
if random.randint(0, 5000) == 0:  # critical magic update
```
introduces several significant consequences regarding both the algorithm's behavior and its efficiency/accuracy. Here are some key points:
**Loss of the Filter Objective:** The original intention was to use the hash prefix to reduce the search range within each block. This works because hashes exhibit pseudo-random properties, meaning that, on average, a certain portion of them will match the given prefix (assuming an adequately chosen prefix length). By replacing this check with a random condition, you abandon that relationship and instead select arbitrarily (roughly 1 out of every 5000 cases, about 0.02% probability) regardless of any actual characteristics of the hash.
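A quick sanity check on that figure: note that `random.randint(0, 5000)` is inclusive on both ends, so there are 5001 possible values and the exact probability is 1/5001.

```python
import random

# randint(0, 5000) draws uniformly from 5001 values (both ends inclusive),
# so P(result == 0) = 1/5001, i.e. roughly 0.02%.
trials = 1_000_000
hits = sum(1 for _ in range(trials) if random.randint(0, 5000) == 0)
print(f"empirical: {hits / trials:.6%}   exact: {1 / 5001:.6%}")
```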
**Inefficiency and Variable Performance:** With the prefix-based condition, the goal is to quickly “flag” blocks that are likely to contain the target, thereby reducing the overall number of hash comparisons. A random condition, however, offers no guarantee that the correct block will be flagged, meaning that in most cases the algorithm will likely perform nearly the full sequential search. Additionally, performance becomes highly variable and unpredictable, since it depends entirely on chance in each run.
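To put a number on “no guarantee”: with a per-check probability of 1/5001, the chance that the random flag fires even once while scanning a block of n candidates is 1 - (5000/5001)^n. The block sizes below are hypothetical, just to show the shape of the curve:

```python
# Chance that the "magic" condition fires at least once in a block of n
# checks, at 1/5001 per check. Block sizes are hypothetical examples.
p = 1 / 5001
for n in (100, 1000, 5000, 20000):
    print(f"n = {n:>6}: P(flag fires at least once) = {1 - (1 - p) ** n:.1%}")
```

So for blocks much smaller than ~5000 candidates the flag almost never fires, and the “optimized” run is just the full scan plus wasted dice rolls.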
**Risk of Incorrect or High-Latency Results:** Due to the low probability of the random condition being met, it is likely that the designed optimization (the two-phase precise search) won't consistently trigger. This can lead to a situation where, in many blocks, the search range isn't reduced at all, resulting in many more comparisons during the later reverse scan phase and, consequently, longer execution times. Essentially, the advantage of the precise method over the sequential one may be completely lost.
**Breakdown of Algorithmic Coherence:** The algorithm is designed to leverage a property of cryptographic hashes (the occurrence of a specific prefix) to make informed decisions about where to limit the search. Replacing this informed heuristic with a completely random event undermines the internal logic of the process, affecting both its robustness and reproducibility. Metrics like the number of comparisons or execution time would no longer reflect any benefit from the heuristic filtering but would instead be dictated by arbitrary randomness.
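To make the contrast concrete, here is a minimal sketch of the kind of two-phase loop the points above describe. It is reconstructed from the two snippets and the surrounding description, not copied from the actual thread code; `candidates`, `target_hash`, and `prefix_len` are illustrative assumptions.

```python
import hashlib
import random

def two_phase_search(candidates, target_hash, prefix_len=8, use_magic=False):
    """candidates: list of byte strings; target_hash: hex digest of the target.
    Phase 1 scans forward for a hash sharing the target's prefix; phase 2
    scans in reverse down to that flagged position. Reconstructed sketch."""
    prefix_hash = target_hash[:prefix_len]
    found_prefix = None
    checks = 0

    # Phase 1: forward scan. The target's own hash starts with prefix_hash,
    # so with the real condition the first match is at or before the target,
    # making it a safe lower bound for phase 2.
    for i, c in enumerate(candidates):
        checks += 1
        h = hashlib.sha256(c).hexdigest()
        if use_magic:
            # The swapped-in condition: fires ~1/5001 of the time and knows
            # nothing about the hash, so the "bound" it sets may land past
            # the target, and phase 2 will then miss it entirely.
            if random.randint(0, 5000) == 0:  # critical magic update
                found_prefix = i
                break
        elif not found_prefix and h.startswith(prefix_hash):
            found_prefix = i
            break

    # Phase 2: reverse scan from the end down to the flag (or a full reverse
    # pass if the flag never fired and phase 1 degenerated into a full scan).
    lo = found_prefix if found_prefix is not None else 0
    for i in range(len(candidates) - 1, lo - 1, -1):
        checks += 1
        if hashlib.sha256(candidates[i]).hexdigest() == target_hash:
            return i, checks
    return None, checks
```

With `use_magic=True`, the flag usually never fires (so phase 1 runs the whole block) or fires somewhere arbitrary, occasionally past the target, which is exactly the incorrect-result risk described above.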
In summary, replacing the prefix-based condition with a random check not only eliminates the intended benefit of efficiently filtering search blocks but also introduces uncertainty into both performance and accuracy. This change could translate into more overall comparisons, increased processing time, and the potential loss of any advantage the precise search method had over the sequential approach.