If you, or someone who can program, can work on exporting the hashtable to text files (like your export option configuration), then there would be no need for save files, merging, etc. If you give the user the option to export any number of tame and wild files (for example, 20 tame and 20 wild files), then no merging or saving is needed: just compare the different tame and wild files for a collision/solution. This also eliminates any RAM issues.
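For illustration, a minimal sketch of that comparison, assuming each exported line starts with the DP's x-coordinate in hex (the tame*.txt / wild*.txt names and the line format are hypothetical):
Code:
import glob

def load_xcoords(pattern):
    # Collect the first column (hex x-coordinate) of every matching file.
    xs = set()
    for path in glob.glob(pattern):
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line:
                    xs.add(line.split()[0].lower())
    return xs

tame = load_xcoords("tame*.txt")
wild = load_xcoords("wild*.txt")

# A tame and a wild kangaroo landing on the same x-coordinate is the
# collision that yields the private key.
for x in tame & wild:
    print("collision:", x)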
Once you know the structure of a workfile, it is not difficult to work with this type of file. I have written a small script in Python as an example; anyone can modify it to suit their needs.
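As a rough illustration, here is a minimal sketch of such a script, under the structure discussed in this thread: a header (magic, version, dpSize, the 256-bit range start/end and key x/y, count, time) followed by 2^18 buckets of 32-byte entries (16 bytes of x, 16 bytes of packed distance). The field widths and bucket count are assumptions; check them against the sources of your version before relying on the output.
Code:
import struct

HASH_BITS = 18                      # assumed: 2^18 buckets (the extra 18 bits of x)
NB_BUCKETS = 1 << HASH_BITS

def dump_workfile(path, out):
    with open(path, "rb") as f, open(out, "w") as o:
        # Assumed header: magic, version, dpSize (uint32 each), then
        # rangeStart/rangeEnd/keyX/keyY (32 bytes each), count (uint64), time (double).
        magic, version, dp = struct.unpack("<3I", f.read(12))
        f.read(4 * 32)                              # skip range start/end and key x/y
        count, elapsed = struct.unpack("<Qd", f.read(16))
        o.write(f"# dp={dp} count={count} time={elapsed:.0f}s\n")
        for bucket in range(NB_BUCKETS):
            # Assumed per-bucket prefix: nbItem and maxItem as uint32 each.
            nb_item, max_item = struct.unpack("<2I", f.read(8))
            for _ in range(nb_item):
                x = int.from_bytes(f.read(16), "little")  # low 128 bits of the x-coordinate
                d = int.from_bytes(f.read(16), "little")  # distance, sign/type packed in top bits
                o.write(f"{bucket:05x} {x:032x} {d:032x}\n")

dump_workfile("save.work", "dp.txt")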
I had been thinking for several days about removing the hashtables from the joining process, and when I finally finished it I found that it had already been implemented by JeanLuc.
Now I have applied the same method to join several files in one pass, reducing the total time quite a bit. My approach is to read all the files in the directory at the same time, inside the HASH_ENTRY loop.
Although my code works, it is still being tested and may contain bugs.
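A rough sketch of that per-bucket idea (not the author's actual code), reusing the layout assumed in the parsing sketch above: open every file at once and, for each of the 2^18 buckets, read that bucket's entries from all files and write them straight out, so only one bucket is ever resident in RAM.
Code:
import glob, struct

HEADER_SIZE = 12 + 4 * 32 + 16      # assumed header size, matching the sketch above
NB_BUCKETS = 1 << 18                # assumed bucket count

def merge_directory(pattern, out_path):
    files = [open(p, "rb") for p in sorted(glob.glob(pattern))]
    with open(out_path, "wb") as out:
        # Copy the first header verbatim (a real merge would also sum the
        # count/time fields; ranges and key must match anyway).
        out.write(files[0].read(HEADER_SIZE))
        for f in files[1:]:
            f.seek(HEADER_SIZE)
        for _ in range(NB_BUCKETS):
            items = []
            for f in files:
                nb, _mx = struct.unpack("<2I", f.read(8))
                items.extend(f.read(32) for _ in range(nb))
            # Drop exact duplicates; a real merge would also look here for a
            # tame/wild pair with the same x, which is the solution.
            items = sorted(set(items))
            out.write(struct.pack("<2I", len(items), len(items)))
            out.writelines(items)
    for f in files:
        f.close()

merge_directory("parts/*.work", "merged.work")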
I was testing the "directory merge" function and RAM is quickly exhausted. I thought I had forgotten to free the temporary HashTable in each read iteration, but I changed the code and the problem remains. The merged save file is 5 GB, and the merge process takes up about 14 GB of RAM.
I think the most obvious solution is to sort the files from biggest to smallest when they are merged, or to use only small save files.
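If the merge does load entire tables into memory, ordering by size is a one-liner (path pattern hypothetical):
Code:
import glob, os

# Merge the biggest file first so the largest table is loaded only once.
files = sorted(glob.glob("parts/*.work"), key=os.path.getsize, reverse=True)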
On the other hand, I think the -ws flag is problematic when combined with -wsplit, generating larger files than necessary. Do you think it would be worthwhile to separate the DPs and the kangaroos into different save files?
As next improvements, I will work on improving the export of the DPs and on the possibility of modifying the DP bits in a save file to reduce its size if we chose too low a DP value. It could also be interesting to remove the distances from a save file so that it can be shared without giving away the prize.
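To illustrate the DP-bit idea: raising the DP from 28 to 32, say, means keeping only the points that satisfy the stricter mask, which shrinks the table by about 2^-4. A hypothetical filter over a text dump whose first column is the full 256-bit x-coordinate in hex (the file names, and the rule that a DP has its top dp bits of x equal to zero, are assumptions):
Code:
NEW_DP = 32                          # example target; the old file was DP 28

def is_dp(x_hex, dp=NEW_DP):
    # Assumed DP rule: the top `dp` bits of the 256-bit x-coordinate are zero.
    return int(x_hex, 16) >> (256 - dp) == 0

with open("dp.txt") as src, open("dp32.txt", "w") as dst:
    for line in src:
        if line.strip() and not line.startswith("#") and is_dp(line.split()[0]):
            dst.write(line)
Dropping the distance column from such a dump likewise gives a table that can be shared without giving away the prize, like the pk #110 table later in the thread.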
Re: Pollard's kangaroo ECDLP solver
by patatasfritas on 29/05/2020, 07:39:46 UTC
Board: Development & Technical Discussion
In "wsplit" mode, the time and count should be reset after every save? When I merged all split files the total time are incorrect.
To all small pk #110 solvers: I have just uploaded some calculated distinguished points (DP=28) for pk #110. There are 1.4 million DPs (2^20.42). The table includes the x-coordinate (DP 28) together with the kangaroo type (0 for tame, 1 for wild).
Have a look at your tables, and if you have the same x-coordinate but with a different type (i.e. wild in my table but tame in yours), we could immediately retrieve the private key for #110.
PS. The table includes only x-coordinates. Without the distances it is useless on its own, but it is helpful for finding cross-collisions with others.
What is the best way to compare TXT lists? I'm going to separate wild and tame into two different files, merge and sort them together with my own files, and use uniq -d to find duplicates.
Jean_Luc, I want to look at the table that is saved to the workfile. As I understand it, only 128 bits are saved for the X-coordinate and 126 bits for the distance (together with 1 bit for the sign and 1 bit for the kangaroo type).
Anyway, what is the easiest way to get the whole table in txt format? I can easily read the header, dp, start/stop ranges, and the x/y coordinates of the key from the binary file. After that, the hash table is saved with a lot of zero bytes... Can you briefly describe the hash table structure that is saved to the binary file?
If you still haven't managed to export the workfile, here are the changes I made to the code to export the DPs to txt.
The X-coordinate is 128+18 bits and the distance is 126 bits. The X-coordinate can be regenerated from the distance if necessary.
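For reference, a tiny sketch of unpacking that 128-bit distance word; which of the two top bits is the sign and which is the kangaroo type is an assumption here, so verify it against HashTable.h:
Code:
def unpack_distance(d128: int):
    # Assumed layout: b127 = kangaroo type, b126 = sign, b125..b0 = distance.
    # Swap the two top bits if your HashTable.h defines them the other way round.
    kang_type = (d128 >> 127) & 1
    sign = (d128 >> 126) & 1
    dist = d128 & ((1 << 126) - 1)
    return ("wild" if kang_type else "tame", -dist if sign else dist)

# Example: the raw 16 bytes of an entry, little-endian as written on x86.
raw = bytes.fromhex("2a" + "00" * 14 + "80")          # type bit set, distance 0x2a
print(unpack_distance(int.from_bytes(raw, "little")))  # ('wild', 42)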
The MergeWork and LoadTable functions can help you understand the save file structure. I'm trying to export the DPs to txt (or another format) and to synchronize only new data over the network.
I made a small patch to try to speed up kangaroo transfer and also changed the timeout to 3 seconds. Could you try it and tell me if it improves anything?
I had the same problem. Also, when the save fails, the GPU drops from ~500M to ~20M.