Edited on 25/06/2025, 16:20:54 UTC
DPs don't need fast lookup.

If the idea is to find quick solutions, you need to include more DPs in your searches, so I assume you need more frequent queries. And to keep both working without creating bottlenecks, you need fast access and compact databases, because not everyone has hundreds of terabytes of storage or supercomputers.

What happens is that you extrapolate everything to environments that go hand in hand with Google, Amazon, and Microsoft, and these scripts try to solve things faster with solutions within reach of the average user.

A separate thread works just fine. The only bottleneck would be having more DPs generated than the insertion/lookup can handle. However, when you only produce a few DPs per second or per minute, while the database can handle several hundred thousand operations per second anyway, I don't see the benefit of speeding that up another 10,000 times (e.g. by keeping it in RAM). At some point, RAM won't be enough anyway.
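The separate-thread idea above can be sketched roughly like this: worker threads push distinguished points into a queue, and a single writer thread batch-inserts them into SQLite. This is only an illustration under assumptions I'm making up (the `dp` table name, an `(x, dist)` schema, a batch size of 1000); it is not the actual solver's code.

```python
# Hypothetical sketch: workers push DPs into a queue, one writer thread
# drains it and batch-inserts into SQLite. Schema and names are assumed.
import queue
import sqlite3
import threading

dp_queue: "queue.Queue[tuple[bytes, int] | None]" = queue.Queue()

def writer(db_path: str) -> None:
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS dp (x BLOB PRIMARY KEY, dist INTEGER)")
    batch = []
    while True:
        item = dp_queue.get()
        if item is None:          # sentinel: flush remaining batch and stop
            break
        batch.append(item)
        if len(batch) >= 1000:    # group inserts into one transaction
            con.executemany("INSERT OR IGNORE INTO dp VALUES (?, ?)", batch)
            con.commit()
            batch.clear()
    if batch:
        con.executemany("INSERT OR IGNORE INTO dp VALUES (?, ?)", batch)
        con.commit()
    con.close()

t = threading.Thread(target=writer, args=("dp.db",), daemon=True)
t.start()
# workers call: dp_queue.put((x_bytes, distance))
# shutdown:     dp_queue.put(None); t.join()
```

Even a modest batch size keeps the database far ahead of a DP generation rate measured in points per second.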

No need to complicate things so much.

PS: I'm not marketing cloud services, nor did I write SQLite or anything. But why reinvent the wheel when there are already answers to solved problems? For example, mapping files to RAM and so on solves nothing when you have huge files; it just decreases performance. And SQLite can very well run from RAM itself, besides having super-optimized strategies for page caching and low-level data access.
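As a minimal sketch of that last point: SQLite can be told to live in RAM outright, or to keep a much larger page cache over a file-backed database. The pragma values below are illustrative, not tuned for any particular workload.

```python
# Letting SQLite itself use RAM, instead of hand-rolled file-to-memory mapping.
import sqlite3

# Option 1: pure in-memory database (contents are lost when the connection closes).
mem = sqlite3.connect(":memory:")

# Option 2: file on disk, with a bigger page cache and WAL journaling.
con = sqlite3.connect("dp.db")
con.execute("PRAGMA journal_mode=WAL")      # readers don't block the writer
con.execute("PRAGMA cache_size=-262144")    # negative value = KiB, ~256 MiB cache
con.execute("PRAGMA synchronous=NORMAL")    # fewer fsyncs; acceptable for a DP store
```

With a cache that large, a hot working set of DPs is effectively served from RAM while SQLite still handles paging and durability.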
Original archived: Re: Mark1 - pollard rho implementation (38 minutes for 80 bits solving on CPU)
Scraped on 25/06/2025, 15:50:57 UTC