Board: Bitcoin Discussion
Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
by kTimesG on 21/04/2025, 16:23:45 UTC
snip~

Code:
=== FINAL RESULTS ===
Wins:
Sequential: 181
Prefix: 296
Ties: 23
Total Checks:
Sequential: 24959280
Prefix: 24189089
Average Success Rates:
Total Avg / Wins
Sequential(1 victory for each): 137896.57
Prefix(1 victory for each): 81719.90

My brain's fried, let's ask AI:

Quote
There’s nothing syntactically “wrong” with this code, but there are a few logical and statistical issues in the way you’re comparing the two search methods:

1. You’re averaging “checks” only over wins

Code:
avg_success_rate_sequential = total_checks_sequential / wins_sequential

This divides the checks accumulated across all trials by only the number of trials where sequential_search “won” — a mixed ratio that is neither a per‑trial average nor an average over the winning runs alone, since losses and ties contribute to the numerator but not the denominator.

That means if one method rarely “wins,” you’ll divide by a very small number and get a deceptively large average.

Conversely, if it wins almost every time, you’ll be averaging only its best performances.

What you probably want instead is the average number of checks per trial, across all trials, regardless of whether it “won,” “lost,” or “tied” on that trial:
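For example (a sketch, not the original script — the trial count is inferred from the tallies in the output above, 181 + 296 + 23 = 500):

```python
# Per-trial average using the totals printed in the FINAL RESULTS block.
# num_trials is inferred from wins + ties: 181 + 296 + 23 = 500.
num_trials = 181 + 296 + 23

total_checks_sequential = 24959280
total_checks_prefix = 24189089

# Average checks per trial, over ALL trials (wins, losses, and ties alike)
avg_sequential = total_checks_sequential / num_trials
avg_prefix = total_checks_prefix / num_trials

print(f"Sequential: {avg_sequential:.2f} checks/trial")
print(f"Prefix:     {avg_prefix:.2f} checks/trial")
```

On these numbers both methods land near 48–50k checks per trial — a far smaller gap than the win-only averages (137896.57 vs 81719.90) suggest.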

2. “Wins” as a performance metric is crude

You’re counting a “win” whenever method A uses strictly fewer checks than method B on a single trial. But that ignores how much better it was. A method that “loses” by 1 check a thousand times but “wins” by 10,000 checks just once will look like a bad runner‑up by win count, even though it’s dramatically faster on average.

Better alternatives:

Compare the distribution of checks (mean, median, percentiles) rather than just counting wins.

Compute the mean difference in checks per trial:
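A minimal sketch of both alternatives, assuming the trial loop collects a per-trial check count for each method (the lists here are hypothetical placeholder data, not results from the script):

```python
import statistics

# Hypothetical per-trial check counts; the real script would append these
# inside its trial loop instead of only keeping running totals.
checks_sequential = [120, 95, 300, 210, 150]
checks_prefix     = [100, 110, 250, 180, 140]

# Distribution summary per method: mean and median
for name, data in (("Sequential", checks_sequential),
                   ("Prefix", checks_prefix)):
    print(f"{name}: mean={statistics.mean(data):.1f} "
          f"median={statistics.median(data)}")

# Mean per-trial difference: positive => prefix needed fewer checks
diffs = [s - p for s, p in zip(checks_sequential, checks_prefix)]
print(f"Mean difference (sequential - prefix): {statistics.mean(diffs):.2f}")
```

The paired difference is the key number: it weighs every trial by how much one method beat the other, instead of collapsing each trial to a single win/loss bit.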

3. Ties are effectively ignored in your averages

You increment results["ties"], but then never use that count in any of your averages or analyses. If ties are frequent, you’re throwing away a lot of information.
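At minimum the tie count should show up in the reported rates, so the win percentages are taken over all 500 trials rather than only the decided ones — a sketch using the tallies from the output above:

```python
# Final tallies from the FINAL RESULTS block above.
wins_sequential, wins_prefix, ties = 181, 296, 23
total = wins_sequential + wins_prefix + ties  # 500 trials

# Rates over ALL trials, so ties are no longer silently dropped
print(f"Sequential win rate: {wins_sequential / total:.1%}")
print(f"Prefix win rate:     {wins_prefix / total:.1%}")
print(f"Tie rate:            {ties / total:.1%}")
```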