Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
No, as far as I understand, in the case of http://web.archive.org/web/20250725043122/https://cr.yp.to/dlog/cuberoot-20120919.pdf the complexity is decreased by the square of the size of the table. And anyway, the challenge does indeed involve computing several discrete logarithms, so reusing the precomputation would be worthwhile, wouldn't it?
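
For reference, the rough trade-off I take away from that paper, stated loosely in my own notation (N is the size of the range, T the number of stored distinguished points; constants dropped):

```latex
% Loose reading of the linked paper's trade-off, up to constant factors.
% N = size of the search range, T = number of precomputed distinguished points.
\text{precomputation} \approx \sqrt{N \cdot T}, \qquad
\text{online cost per DLP} \approx \sqrt{N / T}
% e.g. T \approx N^{1/3} gives precomputation \approx N^{2/3}
% and roughly N^{1/3} work for each subsequent discrete log.
```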

In theory, yes. In practice, the algorithm you use may or may not allow you to reuse the precomputed data, because you have to factor in that the DLPs lie in a higher and higher range, and the data you precomputed may only have been optimal up to some upper bound (otherwise, it would have been inefficient at solving the very first DLP).

To sum it up: this is only useful if one wants to solve a large number of DLPs, up to some upper bound. For example, all the puzzles up to 120 bits, in the absence of any exposed pubKeys, can be attacked with precomputed data that allows ANY key of 120 bits or fewer to be found. It can solve Puzzle 1, 2, 3, 4, .... 70, 71, 72, .... 115, 116... up to 120 bits. But it will have a 50% chance of failing to find a 121-bit key, a 75% chance of failing to find a 122-bit key, and so on, because the new keys may fall outside the precomputed domain, and finding them may or may not be possible.
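
Here is a minimal toy sketch of that reuse idea, using baby-step/giant-step over plain integers modulo a prime rather than secp256k1; the prime, base, bound and table size below are all made up purely for illustration. One table, built once, serves any key under its bound, and is simply of no use above it:

```python
# Toy illustration (NOT secp256k1): reuse one precomputed table for many
# discrete logs, but only for exponents below the bound it was built for.
# Group: integers modulo a prime p, with an assumed generator-like base g.

p = 0xFFFFFFFFFFFFFFC5          # 64-bit prime (2^64 - 59), toy group only
g = 3                           # assumed base for the toy example

BOUND = 1 << 24                 # table covers keys in [0, 2^24)
M = 1 << 12                     # baby-step width = sqrt(BOUND)

def precompute_table(g, p, m):
    """One-time cost ~m: store g^j for all baby steps j in [0, m)."""
    table = {}
    x = 1
    for j in range(m):
        table[x] = j
        x = (x * g) % p
    return table

def solve_with_table(h, table, g, p, m, bound):
    """Find k < bound with g^k = h, reusing the precomputed baby steps.
    Online cost ~bound/m giant steps. Returns None if k is out of range."""
    giant = pow(g, -m, p)       # g^(-m) mod p
    y = h
    for i in range(bound // m + 1):
        if y in table:
            return i * m + table[y]
        y = (y * giant) % p
    return None                 # key lies outside the precomputed domain

table = precompute_table(g, p, M)           # pay the precomputation once

for k in (12345, 9999999, (1 << 23) + 7):   # several keys inside the bound
    h = pow(g, k, p)
    assert solve_with_table(h, table, g, p, M, BOUND) == k

k_big = (1 << 30) + 123                     # key above the 2^24 bound
h_big = pow(g, k_big, p)
print(solve_with_table(h_big, table, g, p, M, BOUND))   # -> None: no help here
```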

It is useless to do this for puzzles that are unsolved and have the pubKey exposed, because in THAT case the most efficient approach is to simply merge the precomputing with the solving, to obtain the minimum effort (e.g. a 1/2 + 1/2 split in the exponents, i.e. on the order of N^(1/2) + N^(1/2) work for a range of size N, times whatever constant factor, plus any overheads).
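
As a toy stand-in for that "merge" idea (again plain modular arithmetic, not secp256k1, and baby-step/giant-step instead of the kangaroo method one would actually run on the curve, only because it makes the sqrt(N) + sqrt(N) split explicit; the range and key are made up):

```python
# Toy stand-in for "merge the precomputation with the solving": when the
# pubKey for a single range of size N is already known, size the two phases
# at sqrt(N) each (the "1/2 + 1/2" exponents) instead of building a table
# meant to be reused later.

from math import isqrt

p = 0xFFFFFFFFFFFFFFC5          # 64-bit prime (2^64 - 59), toy group only
g = 3                           # assumed base for the toy example

def solve_single_target(h, lo, hi):
    """Find k in [lo, hi) with g^k = h (mod p), doing ~sqrt(N) + sqrt(N) work."""
    n = hi - lo
    m = isqrt(n) + 1
    # "Precompute" phase, sized only for THIS range: ~sqrt(N) baby steps.
    table = {}
    x = pow(g, lo, p)
    for j in range(m):
        table[x] = j
        x = (x * g) % p
    # "Solve" phase: ~sqrt(N) giant steps.
    giant = pow(g, -m, p)
    y = h
    for i in range(m + 1):
        if y in table:
            return lo + i * m + table[y]
        y = (y * giant) % p
    return None

# A made-up puzzle-style range: key known to lie in [2^31, 2^32).
secret = (1 << 31) + 0xBEEF42
h = pow(g, secret, p)
print(solve_single_target(h, 1 << 31, 1 << 32) == secret)   # -> True
```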