I'm trying to write a C++ program that computes the points for scalars 1 to 1 billion and stores each x value and its scalar in text files bucketed by the first 4 hex characters of x, but I have two problems that make the process very slow.
1. libsecp256k1 can't keep adding G to itself repeatedly; it can't handle adding a point to itself.
2. The core problem that slows it down: we can't open 16^4 files at the same time, since we're limited to about 3200 open files, and we don't know what the next point's prefix will be, so we can't choose which files to open in advance. Opening only one file at a time is very, very slow: the speed is about 1000 points per 5 seconds.
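One way around the open-file limit is to buffer records per bucket in RAM and append each bucket in batches, so only one file is ever open at a time but each open/close is amortized over many lines instead of one. A minimal sketch (the `bucket_XXXX.txt` naming, the flush threshold, and the sample x value are my own assumptions, not anything from libsecp256k1):

```cpp
#include <array>
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

// 16^4 = 65536 buckets, keyed by the first 4 hex characters of x.
constexpr size_t kBuckets = 1u << 16;
constexpr size_t kFlushAt = 1024;  // flush a bucket once it holds this many lines

std::array<std::vector<std::string>, kBuckets> buffers;

// Append a full bucket to its file in one open/close, then clear the buffer.
void flush_bucket(size_t b) {
    if (buffers[b].empty()) return;
    char hex[5];
    std::snprintf(hex, sizeof hex, "%04zx", b);
    std::string name = std::string("bucket_") + hex + ".txt";
    std::FILE* f = std::fopen(name.c_str(), "a");
    if (!f) return;
    for (const std::string& line : buffers[b])
        std::fprintf(f, "%s\n", line.c_str());
    std::fclose(f);
    buffers[b].clear();
}

// Buffer one "x scalar" record in memory; touch disk only when the bucket fills.
void add_record(const std::string& x_hex, uint64_t scalar) {
    size_t b = std::stoul(x_hex.substr(0, 4), nullptr, 16);
    buffers[b].push_back(x_hex + " " + std::to_string(scalar));
    if (buffers[b].size() >= kFlushAt) flush_bucket(b);
}
```

After the main loop you'd drain every non-empty bucket with one final `flush_bucket` pass. Worst-case buffered memory is roughly `kBuckets * kFlushAt` lines, so tune `kFlushAt` down if RAM is tight.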
Depending on how many cores you have (1 to 100), and with fast storage (at least an SSD), this can be done efficiently, in somewhere between 10 seconds and 3 minutes.
Can I ask what the purpose of this is? The code is kinda complex, to be honest: it involves dividing the work across all cores, adding points in Jacobian form, inverting the Z product a single time, and then finishing the individual additions by down-sweeping the inverted Zs and normalizing back to affine points.
You'll also need a ton of RAM (or less, if you split the whole range, at the cost of a longer run time).
I don't know how BSGS works, but I think it's the same core idea. I create a database of the first 1 billion points and store the x values (the scalar doesn't really matter, but it can speed things up). Then we keep adding 1 billion to the start of the range and subtract that point from the target by adding its negation; if we are within 1 billion of our target, the point the subtraction yields will be in the DB. You understand this already, but the purpose of dividing the points into different files is to speed up the lookup: searching for a point among 1 billion lines takes 200 seconds, while a file that only contains 15k points takes under a second.
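The lookup side of that bucketing can then touch a single file: derive the 4-hex-char prefix of the target x, open just that bucket, and scan its ~15k lines instead of 1 billion. A sketch, assuming buckets are named `bucket_XXXX.txt` and each line is "x scalar" (both assumptions of mine):

```cpp
#include <cstdint>
#include <fstream>
#include <optional>
#include <string>

// Look a target x up in the ONE bucket file named by its 4-char prefix.
// Returns the stored scalar if the x value is present.
std::optional<uint64_t> lookup(const std::string& x_hex) {
    std::ifstream f("bucket_" + x_hex.substr(0, 4) + ".txt");
    std::string x;
    uint64_t k;
    while (f >> x >> k)        // linear scan is fine at ~15k lines per bucket
        if (x == x_hex) return k;
    return std::nullopt;       // not in this bucket => not in the DB
}
```

If even the per-bucket scan matters, sorting each bucket once and binary-searching it (or loading it into an `std::unordered_map`) would drop the lookup further, but at 15k lines a plain scan is already well under a second.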
I only have an Intel i5 (4 cores) and 8 GB of RAM.