Version 2
Edited on 25/06/2025, 12:41:24 UTC
WTF are you talking about?

I have no idea what you're talking about.

Thus, the statement "DPs do not need instant lookup" is incorrect in this context because:

This implementation requires fast lookups for real-time collision detection.
Using memory-mapped files achieves both performance and persistence.

DPs don't need fast lookup. It may help to take a closer look at the general algorithm. In essence:

1. Generate DPs.
2. Detect collisions.
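As a minimal sketch of step 1, assuming the usual convention that a distinguished point is one whose low bits are zero (the walk function below is a toy hash-based stand-in for the actual elliptic-curve iteration, not the real thing):

```python
# Minimal sketch of the "generate DPs" step. A DP here is any value whose
# low DP_BITS bits are zero (the usual convention). step() is a toy
# pseudo-random walk based on SHA-256, standing in for real EC math.
import hashlib

DP_BITS = 8                      # distinguished-point mask size (toy value)
DP_MASK = (1 << DP_BITS) - 1

def step(x: int) -> int:
    """One iteration of a toy pseudo-random walk (placeholder for EC math)."""
    return int.from_bytes(hashlib.sha256(x.to_bytes(32, "big")).digest(), "big")

def walk_to_dp(start: int, max_steps: int = 1_000_000):
    """Walk from `start` until a distinguished point is hit."""
    x = start
    for i in range(max_steps):
        x = step(x)
        if x & DP_MASK == 0:     # low DP_BITS bits are zero -> distinguished
            return x, i + 1      # the DP and the walk length
    return None                  # no DP found within the step budget
```

Each worker just repeats this loop and emits the resulting DPs; nothing about step 2 is involved yet.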

These two steps are independent. There's no need to store anything in RAM, not even the generated DPs. They may arrive from a worker node, for example as a file, before being pushed into a high-capacity storage system (one that can hold on the order of many billions of entries). Even using just 64 bits of a DP as the index key is a no-brainer for any database: it supports up to 2**64 entries with very fast lookups, no matter the actual storage subsystem or its size. When an insert fails, handle the potential collision (or the mere 64-bit key conflict).
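A sketch of that insert-or-detect pattern, using SQLite purely for illustration (the table and column names are made up; any key-value store with unique keys works the same way):

```python
# Sketch of the insert-or-detect-collision pattern: the low 64 bits of a
# DP serve as the primary key; a failed insert flags a potential collision
# (either a true walk collision or just a 64-bit key conflict).
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE dp (key INTEGER PRIMARY KEY, walk_info BLOB)")

def store_dp(dp_value: int, walk_info: bytes):
    """Insert a DP; return the previously stored walk_info on a key
    conflict (a potential collision), or None if the insert succeeded."""
    key = dp_value & 0xFFFFFFFFFFFFFFFF          # truncate DP to 64 bits
    if key >= 1 << 63:                           # SQLite INTEGER keys are
        key -= 1 << 64                           # signed 64-bit; wrap around
    try:
        db.execute("INSERT INTO dp VALUES (?, ?)", (key, walk_info))
        return None                              # no conflict
    except sqlite3.IntegrityError:
        row = db.execute("SELECT walk_info FROM dp WHERE key = ?",
                         (key,)).fetchone()
        return row[0]                            # compare walks off-line
```

On a conflict, compare the full walk data: if it belongs to a different walk, you have a real collision; otherwise it was just two DPs sharing the same 64-bit prefix.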

That's why I don't see the point of accelerating DP lookups with RAM or Bloom filters. They don't really help much, except perhaps for educational purposes on very low-end puzzles.

The only issue is that finding DPs is slow for high puzzles, so the database grows slowly. But, for example, with a 32-bit DP mask, puzzles up to 192 bits are manageable with at most 2**64 entries. For Puzzle 135 and a 32-bit DP mask, only around 2**35 entries of storage are needed, which is about 34 billion entries. This can fit on a single SSD.
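Those storage figures follow from the birthday bound: roughly 2**(n/2) walk steps for an n-bit interval, with one DP stored every 2**dp steps. A quick back-of-the-envelope check (constant factors ignored, and assuming the Puzzle 135 interval has size ~2**134):

```python
# Back-of-the-envelope DP storage estimate: for an n-bit search interval
# and a dp-bit distinguished-point mask, the expected number of stored
# DPs is about 2**(n/2) / 2**dp. Constant factors are ignored.
def expected_dps(interval_bits: int, dp_bits: int) -> int:
    return 1 << (interval_bits // 2 - dp_bits)

print(expected_dps(192, 32))  # 192-bit interval, 32-bit mask -> 2**64 entries
print(expected_dps(134, 32))  # Puzzle 135 (~2**134 interval)  -> 2**35 entries
```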
Original archived Re: Mark1 - pollard rho implementation (38 minutes for 80 bits solving on CPU)
Scraped on 25/06/2025, 12:11:35 UTC