Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
A completely new class of methods for computing discrete logarithms

This paper seems to be about a specific case http://web.archive.org/web/20250725043122/https://cr.yp.to/dlog/cuberoot-20120919.pdf but in reality the method is generic. They talk about "small" discrete logarithms in the same vein that Pollard rho's complexity is said to be too high to handle large discrete logarithms…

Victor Shoup proved that no generic discrete logarithm solving method can perform better than x^(1/2). This is indeed the complexity of Pollard kangaroo and Pollard rho. But he also theorized that an algorithm with precomputation can yield at best a complexity of x^(1/3), which means the lower bound to break full-sized secp256k1 is far less than the 2^128 estimated security.
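
For intuition on where that x^(1/2) figure comes from, here is a minimal Pollard rho sketch in Python over a tiny toy subgroup (the parameters p, q, g and the walk function are hypothetical illustrations, unrelated to secp256k1): the pseudo-random walk collides after roughly sqrt(q) steps by the birthday bound, which is exactly why the plain method cannot scale to 256-bit groups.

```python
import random

# Hypothetical toy parameters (a tiny Schnorr-style group), NOT secp256k1.
p = 2039              # prime modulus
q = 1019              # prime order of the subgroup generated by g
g = 4                 # generator of the order-q subgroup of Z_p*
secret = 777          # the discrete log we pretend not to know
h = pow(g, secret, p) # public value: h = g^secret mod p

def step(y, a, b):
    # One pseudo-random walk step, preserving the invariant y = g^a * h^b (mod p).
    s = y % 3
    if s == 0:
        return (y * y) % p, (2 * a) % q, (2 * b) % q
    if s == 1:
        return (y * g) % p, (a + 1) % q, b
    return (y * h) % p, a, (b + 1) % q

def pollard_rho():
    while True:
        a, b = random.randrange(q), random.randrange(q)
        x = pow(g, a, p) * pow(h, b, p) % p
        X, A, B = x, a, b
        while True:                          # Floyd cycle detection: hare moves twice per tortoise step
            x, a, b = step(x, a, b)
            X, A, B = step(*step(X, A, B))
            if x == X:
                break
        if (B - b) % q:                      # usable collision: g^a * h^b = g^A * h^B
            return (a - A) * pow((B - b) % q, -1, q) % q
        # degenerate collision: restart with a fresh random walk

print(pollard_rho(), "== expected", secret)
```

The expected number of walk steps grows like sqrt(q), so at q ~ 2^256 this is hopeless without the precomputation idea the paper describes.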

This paper indeed dives into that class of methods: faster solving at the expense of memory and storage.

anyone to turn its mathematical description into an implementation?

Yes. That paper is the very basis of everything I have been talking about numerous times when saying that the DLP can be solved much faster.

You can also see it in practice whenever you hear anyone talking about precomputed data.

Note that reaching the 1/3 exponent complexity also requires doing the 2/3 exponent pre-work, so for secp256k1, if you want to reach that lower bound, you first have to do 2**170 group operations (and also store a very large amount of data, depending on the desired DP frequency; in any case, much, much more than the number of bits in all the storage drives in existence, raised to the power of 2).
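
As a back-of-the-envelope check on those exponents (my own arithmetic, using the standard secp256k1 group order), n^(2/3) indeed lands around 2^170 and n^(1/3) around 2^85:

```python
from math import log2

# Standard secp256k1 group order n (a public constant).
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

print(f"log2(n)       = {log2(n):.1f}")          # ~256.0 bits
print(f"log2(n^(1/3)) = {log2(n) / 3:.1f}")      # ~85.3  -> online solving exponent
print(f"log2(n^(2/3)) = {log2(n) * 2 / 3:.1f}")  # ~170.7 -> one-time precomputation exponent
```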

Another thing is that the 1/3 + 2/3 split refers to an optimal tradeoff between precomputation effort and solving effort, because there's nothing (except memory and time limits) stopping anyone from computing the full log table, storing it, and solving any key in a single O(1) lookup step. And nothing stopping anyone from computing, let's say, half of the full log domain and solving any key in 2 steps. And so on and so forth.
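
To make the degenerate end of that tradeoff concrete, here is a tiny sketch reusing the hypothetical toy parameters from the earlier snippet: the whole log table is precomputed once, and afterwards every key falls to a single dictionary lookup. For secp256k1 the same idea is ruled out only because the table would need on the order of 2^256 entries.

```python
# Degenerate end of the tradeoff on the same hypothetical toy subgroup:
# precompute the *entire* discrete log table once (O(q) work and storage),
# then answer every query with a single O(1) dictionary lookup.
p, q, g = 2039, 1019, 4

table = {}
y = 1
for x in range(q):       # one-time pass over the whole group
    table[y] = x         # record log_g(g^x) = x
    y = y * g % p

h = pow(g, 777, p)       # any target key...
print(table[h])          # ...is solved instantly -> 777
```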