Hi,
Thank you for posting the link and description. Have you gentlemen ever thought about creating a 110-bit DP work file instead of the 80-bit one used in the recent challenge? Is it even possible? How big would the work file for -m 3 be in that case? It would take approximately 15 years with a 10k-CUDA-core card. A crazy idea, but afterwards there would be ~33,000,000 possible subranges of #135 to check. Economically there is no point in such a move, but I think there are some romantics in here.
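The ~33,000,000 figure seems to come from splitting the #135 interval into 110-bit subranges. A quick sketch of that arithmetic (the bit widths here are my reading of the post, not confirmed numbers):

```python
# Back-of-the-envelope check of the "~33,000,000 ranges" claim.
# Assumption: the #135 search interval is treated as 135 bits wide
# and is split into subranges each covered by a 110-bit work file.

RANGE_BITS = 135      # assumed width of the puzzle #135 interval
SUBRANGE_BITS = 110   # proposed precomputed work-file width

subranges = 2 ** (RANGE_BITS - SUBRANGE_BITS)
print(subranges)  # 33554432, i.e. ~33.5 million subranges

# Kangaroo-style work in one subrange is on the order of
# sqrt(2^110) = 2^55 group operations, ignoring constant factors.
ops_per_subrange = 2 ** (SUBRANGE_BITS // 2)
print(ops_per_subrange)
```

This matches the "~33.000.000" in the post: 2^(135-110) = 2^25 = 33,554,432.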

In theory, how long does it take to check a new public key against a precomputed 110-bit work file? @Etar, how long does it take you to find a new public key with the 80-bit precomputed work file?
https://cr.yp.to/dlog/cuberoot-20120919.pdf

Once you read and understand what is written in that paper, you should understand the purpose and usefulness of precomputed DPs. Basically, they are the result of storing the output of many solves on top of each other to help with future solves; they are not a magical way of simplifying the first solve, which is actually slower than usual if the purpose is building the DP database. And splitting the range to solve smaller ranges just increases the overall complexity; it does not reduce anything, on the contrary.
So you can't really precompute #135 DPs unless you can first solve #135 in reasonable time, and then repeat that process many times over.
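For readers unfamiliar with the term: a "distinguished point" is just a point whose coordinate satisfies some cheap, rare predicate, so that only those points need to be stored and compared. A minimal sketch, assuming the common convention of requiring a fixed number of trailing zero bits in the x-coordinate (the names and the mask size below are illustrative, not taken from any particular kangaroo implementation):

```python
# Sketch of a distinguished-point (DP) criterion.
# Assumption: a point is "distinguished" when its x-coordinate
# ends in DP_BITS zero bits; real tools vary in mask size and placement.

DP_BITS = 20                   # example DP mask size
DP_MASK = (1 << DP_BITS) - 1   # low DP_BITS bits set

def is_distinguished(x: int) -> bool:
    """Store a walk's point only when its x-coordinate has
    DP_BITS trailing zero bits."""
    return (x & DP_MASK) == 0

# On average only 1 in 2^DP_BITS points qualifies, so a walk of N
# steps contributes roughly N / 2^DP_BITS entries to the work file.
print(is_distinguished(0x123 << DP_BITS))  # True
print(is_distinguished(0x123))             # False
```

Larger DP masks shrink the work file but lengthen each walk before a storable point appears, which is one reason the first solve gets slower when the goal is building the database.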