
Showing 20 of 70 results by farou9
Post
Topic: Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
Board: Bitcoin Discussion
by farou9 on 20/07/2025, 19:13:07 UTC
Does anyone have a guide to renting online GPUs for compute, so I can run my own scripts?
And what are the best GPUs to rent, and which companies should I rent from?
Post
Topic: Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
Board: Bitcoin Discussion
by farou9 on 07/07/2025, 17:47:49 UTC
When making a bloom filter for the X points of scalars 1...2**30, what is the best hashing algorithm to use to get the fewest false-positive candidates possible?
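With a bloom filter, the false-positive rate is governed almost entirely by the bits-per-key and the number of hash functions, not by which hash algorithm you pick, as long as it mixes well. The usual trick is double hashing: derive all k probe indices from just two 64-bit hashes. A minimal sketch (the splitmix64-style mixer and the reduction of the X coordinate to a 64-bit key are illustrative assumptions):

Code:
#include <cstdint>
#include <vector>

// Minimal bloom filter using double hashing: probe_i = h1 + i*h2 (mod nbits).
// "key" would be e.g. the low 64 bits of an X coordinate.
struct Bloom {
    std::vector<uint64_t> bits;
    uint64_t nbits;
    int k;                                         // probes per key

    Bloom(uint64_t n, int k_) : bits((n + 63) / 64), nbits(n), k(k_) {}

    static uint64_t mix(uint64_t x) {              // splitmix64 finalizer
        x += 0x9E3779B97F4A7C15ull;
        x = (x ^ (x >> 30)) * 0xBF58476D1CE4E5B9ull;
        x = (x ^ (x >> 27)) * 0x94D049BB133111EBull;
        return x ^ (x >> 31);
    }

    void add(uint64_t key) {
        uint64_t h1 = mix(key), h2 = mix(h1) | 1;  // odd h2 avoids short cycles
        for (int i = 0; i < k; i++) {
            uint64_t b = (h1 + (uint64_t)i * h2) % nbits;
            bits[b / 64] |= 1ull << (b % 64);
        }
    }

    bool mayContain(uint64_t key) const {
        uint64_t h1 = mix(key), h2 = mix(h1) | 1;
        for (int i = 0; i < k; i++) {
            uint64_t b = (h1 + (uint64_t)i * h2) % nbits;
            if (!(bits[b / 64] & (1ull << (b % 64)))) return false;
        }
        return true;                               // "maybe": false positives possible
    }
};

At 8 bits per key (the 128 MB for 2**30 keys mentioned elsewhere in this history), the optimal k is about 6 and the false-positive rate lands near 2%.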
Post
Topic: Re: PointsBuilder - fast CPU range points generator
Board: Development & Technical Discussion
by farou9 on 16/05/2025, 15:18:47 UTC
Wait, so what advantage does storing in RAM give exactly? I thought if you store some strings in RAM you can do an instant lookup on them.

Yes, I understand what you are saying: the best we can do is either a bloom filter or the binning 2-phase approach.

Side question: do you have a high-end CPU/GPU?

RAM is "instant" (as in, no seek times) only if you know what address to access. There are also the CPU L1/L2/L3 caches, which have even lower latencies. If addresses are close together (like string bytes are) they'll usually end up in the L1 cache of a core, making you believe the speed is so good because the data was in RAM, when some of it was actually only read once, in bulk, and used word after word via the L1 cache and the CPU registers.

No, you can't store a bloom filter in L1, that shit's just a few hundred KB in size. Store in RAM whatever needs to have fast access (or is read often), save to disk whatever doesn't fit in RAM. Easy, right? That's where the magic is, the implementation. If you're looking to do this to break 135, though, no optimizations will help, you'll still need lots of millions of CPUs, on average. Or maybe just one.
So RAM is only helpful for storing a bloom filter or a small set of strings?

"Store in RAM whatever needs to have fast access": what do you mean by this? We can't know what needs fast access, because every point of the DB needs fast access. Or do you mean the bloom filter?
Post
Topic: Re: PointsBuilder - fast CPU range points generator
Board: Development & Technical Discussion
by farou9 on 16/05/2025, 13:48:20 UTC
My main question is: what kind of DB makes an instant lookup possible for us?!

What you're looking for doesn't exist. Even if you have a single 256-bit public key in your table, if you want it to have an instant lookup you need a contiguous array of 2**256 items, where everything is zeroed out except for the location of your public key. Each location needs to have an exact number of bits for the value, in order to be able to compute the lookup location. You can do the math. You might say that it's absurd, and that you can simply compare any key with your stored key, but this doesn't scale. Once you add a second public key, you're already looking at having to choose between an O(1) lookup (unfeasible), linear comparisons, and a mixed O(~1) + O(log n) lookup algorithm. And so on.

The best you can get is already explained several times. You can get pretty close to instant by using binning (or a bloom filter), followed by a somewhat log-N lookup (or a small maximum amount of suffix check steps). Most of the items won't pass the filter / binning check anyway, which is already an "instant" operation. If you can find something faster than this, you've broken classical computing!
Wait, so what advantage does storing in RAM give exactly? I thought if you store some strings in RAM you can do an instant lookup on them.

Yes, I understand what you are saying: the best we can do is either a bloom filter or the binning 2-phase approach.

Side question: do you have a high-end PC?
Post
Topic: Re: PointsBuilder - fast CPU range points generator
Board: Development & Technical Discussion
by farou9 on 15/05/2025, 21:00:49 UTC
"not enough RAM in the Universe" , how much can we store in 8,16,32,64GB RAM ?

2,3 , BOTH needs to process the Db before entering lookup phase, right ?

I gave you all the calculations according to storing / querying 2**30 X coords and their scalars. Simply adjust the calculations if you want more or less points.

IDK what more you're asking for. Depending on how many points you'd like to store and lookup, you pick the right strategy. There is no magic fastest solution. You need to factor for how much you can fit in RAM, how much you read from the disk, and how many operations need to be performed.

For example, if you want to store 1 trillion points, and if you can fit a bloom filter for them in RAM, you'll need 128 GB of RAM and ~ 40 TB disk storage, just to build the lookup table. Then you can brag about having hundreds of zettakeys/second speed for solving Puzzle 80. Not sure you really grasp the physical limitations though...
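As a back-of-the-envelope check on the disk figure (an illustration, assuming a 32-byte X coordinate plus a 64-bit value per point, matching the earlier estimates): 10^12 * (32 + 8) bytes = 4 * 10^13 bytes = 40 TB.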
It seems you think I have the delusion that even a 2**40 DB can solve 135 in reasonable time 😂.

It's just that I have an idea. It uses DB lookups, but it's not like BSGS: it looks random but it isn't, and it works, but it's unpredictable.
Post
Topic: Re: PointsBuilder - fast CPU range points generator
Board: Development & Technical Discussion
by farou9 on 14/05/2025, 21:30:08 UTC
So if we store sqrt(2**30), a space of 2**31 will need 2*sqrt(2**30).
But where are we storing them, RAM or disk?

If you have 2**30 stored items as the square root, then N is 2**60, not 2**31.

If you can't fit those 40 GB in RAM, you can use a bloom filter (only takes 128 MB, but has the hashing overhead) and do the disk/whatever lookup only for filter hits; or you can use those 8 GB of Phase 1 lookup in RAM, and read from disk/whatever for Phase 2, if the Phase 1 returns a positive hit.

You have all options on the table now, it should be clear what you can use. It's not a surprise that these options simply trade time with space. These options are ordered by speed (fast to slow):

1. O(1) lookup: not enough RAM in the Universe.
2. Binning and 2-phase lookup (40 GB of storage, O(1) average lookup, O(16) worst case)
3. Bloom filter (128 MB + key hashing overhead) followed by O(logN) lookup (or option 2 above).

Choose whatever fits well with your resources. You can also make the binning use more memory to reduce the number of false positives (less hits for doing Phase 2). That's what I'd do anyway.

Big warning here: binning can only work if you know beforehand the maximum amount of items that an entry can have (to know how many bits are needed for the length field). That's why the precomputation is required before building the lookup tables.
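That precount pass is cheap compared to the EC math. A sketch of the idea (the prefix width and container choice are illustrative assumptions, not the actual implementation):

Code:
#include <algorithm>
#include <cstdint>
#include <vector>

// Counting pass: find the worst-case bin occupancy before sizing the table.
// `prefixes` holds each key reduced to its top prefixBits bits.
uint32_t maxBinOccupancy(const std::vector<uint32_t>& prefixes, int prefixBits) {
    std::vector<uint32_t> counts(1ull << prefixBits, 0);
    for (uint32_t p : prefixes) counts[p]++;
    // Bits needed for the per-bin length field: ceil(log2(max + 1)).
    return *std::max_element(counts.begin(), counts.end());
}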
"not enough RAM in the Universe" , how much can we store in 8,16,32,64GB RAM ?

2,3 , BOTH needs to process the Db before entering lookup phase.




Post
Topic
Board Development & Technical Discussion
Re: PointsBuilder - fast CPU range points generator
by
farou9
on 14/05/2025, 13:27:57 UTC
OK, fast lookup is a lost cause then.
So how does BSGS work? It needs to store sqrt of the space to get sqrt-of-space speed, right?

It works just as you said.

If you have to find a key in a space of size N, BSGS lets you express the problem as x * y = N

x is the precomputed table size, y is the number of subtractions and lookups.

x + y is minimal when x = sqrt(N)

When x is any other value (a table that is smaller or larger than sqrt(N)) then x + y is no longer minimal. You end up with either using less memory (and more subs and lookups), or more memory (and less subs and lookups).
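Spelled out: the total work is f(x) = x + N/x; setting f'(x) = 1 - N/x**2 = 0 gives x = sqrt(N), so the minimum total is x + y = 2*sqrt(N).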

Here's a graph for N = 100.

https://talkimg.com/images/2025/05/14/UU961c.png
So if we store sqrt(2**30), a space of 2**31 will need 2*sqrt(2**30).
But where are we storing them, RAM or disk?
Post
Topic: Re: PointsBuilder - fast CPU range points generator
Board: Development & Technical Discussion
by farou9 on 13/05/2025, 20:08:13 UTC
So what is the fastest method to look up in the 2**30? In other words, what should we store them in to get the fastest lookup possible?

You already have the answers. The fastest method is unfeasible, because not enough RAM exists in the Universe. The best you can get on this planet, in 2025, is to use a hash table, to get a logN lookup time.

You can speed that up moderately by binning the key prefixes into a sparse array. For 2**30 keys, the expectation (average) is to have a single entry for every possible combination of the first 30 bits of the key.

So build a 2**30 array with constant-size values, to be able to do an O(1) lookup in the first phase.

Since inevitably some items will have 2, 3, 4, or more suffixes, you need to use a pointer to the list of suffixes and their values. Assuming all the data was precomputed already, this can be kept in a compact bitmap.

Let's see how much data all of these would consume:

We have 2**30 prefixes array items, each using:
   - 30 bits for prefix
   - 30 bits for bitmap data pointer
   - number of items in bitmap (let's say this is a "max length" integer, 4 bits to allow up to 16 entries)
We have 2**30 suffixes, each having (256 - 30) bits.
We have 2**30 values to store (consecutive private keys, with a known base offset), each using 30 bits.

O(1) lookup array will need 64 * 2**30 bits = 8192 MB

For the bitmap data: a single entry in the compact bitmap requires 226 + 30 = 256 bits.
Total data size: 256 * 2**30 bits = 32 GB

Lookup algorithm (2 phases: O(1) for the first phase, 16 steps worst case for the second):

// Phase 1
prefix = publicKey >> 226    // reduce key to 30 bits
dataPtr = prefixes[prefix]
if dataPtr == 0                      // key not found
    return
// Phase 2
for item in data[dataPtr]:
    if item.suffix != publicKeySuffix:
        continue                      // key does not match
    // key matches
    return item.value             // return private key
// if we get here, no suffix matched, so key is not found

There, this ideally uses 40 GB for everything. Not far from my original estimates.

In practice, it's gonna use more memory since it's much much faster to work with aligned units, like bytes and 64-bit addresses. This example is just an abstraction of a best-effort "fast lookup" for what you asked.
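For illustration, here is a self-contained C++ sketch of that two-phase scheme, shrunk to 64-bit toy keys with 16-bit prefix bins; the names and sizes are assumptions for brevity, not the actual 2**30 layout described above:

Code:
#include <cstdint>
#include <optional>
#include <utility>
#include <vector>

struct Entry { uint64_t suffix; uint32_t value; };     // key suffix + private key offset

struct BinnedTable {
    std::vector<uint32_t> binStart;                    // prefix -> first index in entries
    std::vector<Entry>    entries;                     // suffixes, grouped by prefix

    // Build from (key, value) pairs already sorted by key.
    explicit BinnedTable(const std::vector<std::pair<uint64_t, uint32_t>>& sorted)
        : binStart((1u << 16) + 1, 0) {
        for (const auto& kv : sorted) binStart[(kv.first >> 48) + 1]++;  // count per bin
        for (size_t i = 1; i < binStart.size(); i++) binStart[i] += binStart[i - 1];
        for (const auto& kv : sorted)
            entries.push_back({kv.first & 0xFFFFFFFFFFFFull, kv.second});
    }

    std::optional<uint32_t> lookup(uint64_t key) const {
        uint32_t prefix = (uint32_t)(key >> 48);       // Phase 1: O(1) bin index
        uint64_t suffix = key & 0xFFFFFFFFFFFFull;
        for (uint32_t i = binStart[prefix]; i < binStart[prefix + 1]; i++)
            if (entries[i].suffix == suffix)           // Phase 2: few suffix checks
                return entries[i].value;
        return std::nullopt;                           // not found
    }
};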
OK, fast lookup is a lost cause then.
So how does BSGS work? It needs to store sqrt of the space to get sqrt-of-space speed, right?
Post
Topic: Re: PointsBuilder - fast CPU range points generator
Board: Development & Technical Discussion
by farou9 on 13/05/2025, 17:07:57 UTC
OK, let's say we have 60 GB of RAM, and we computed the 2**30 X points into RAM. Will the lookup be instant (O(1))?

It won't be instant, because the keys are dispersed in range 1 to 2**256, so, as I already mentioned, this requires a dictionary data structure (map, tree, hash table, database are such examples).

The only way to have a O(1) lookup is, by basic principles, to know exactly the location that needs to be read. Since you have 2**30 keys, and each key is 256 bits, a truly O(1) lookup requires 2**256 * (sizeOfAssociatedValue) bytes, in order to do a direct O(1) lookup of any 256-bit key. You will have 2**30 locations that have an associated value, and (2**256 - 2**30) locations that are wasting space for nothing. But it is O(1) in complexity.

That's why other data structures like trees / maps / hash tables / storage databases are optimal: they are the perfect balance between minimizing storage and minimizing complexity down from O(n) to O(logN). O(1) is only practical if there is enough spare memory to keep all possible keys.

There are only around 2**70 bytes in all storage devices ever manufactured on Earth, so do you get the picture now?
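Putting numbers on that: even at a single byte per slot, a direct-address table over 256-bit keys needs 2**256 bytes; against the ~2**70 bytes ever manufactured, that is a shortfall factor of 2**186, i.e. around 10**56.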
So what is the fastest method to look up in the 2**30? In other words, what should we store them in to get the fastest lookup possible? I store them in 16**4 different .txt files based on the points' prefixes, so each file has ~16k points, but this only gives 45 lookups per second on my laptop; it might be faster on a better CPU.
Post
Topic: Re: PointsBuilder - fast CPU range points generator
Board: Development & Technical Discussion
by farou9 on 13/05/2025, 02:38:19 UTC
How much RAM do we need to store 2^30 points, and what GPU or GPUs do we need to keep the operation and lookup speed "instant"? Like, if our GPU can do 10^9 scalar ops per second, how can we maintain that speed and the matching lookups?

There is no known algorithm that can look up a non-linear key, in a dictionary structure, in instant (O(1)) time. A tree data structure retrieves keys and data in log(numKeys) steps (for example, 30 steps on average / worst-case, to find a value in a binary tree with 2**30 keys, depending on the type of the data structure). B-Trees (used by SQLite) can do it even faster, since a node can have hundreds of direct children. This is why you're most likely better off storing such a DB on disk, and letting the DBMS take care of what's cached in RAM. Of course, a bloom filter can take care of fast filtering before such a lookup is even needed.

The only way to know how much RAM is needed is to compute and store the table, because you can't know in advance the total entropy of all those keys, so it may require more or less space, depending on the clustering density of the key indexing (same prefix bytes = less used memory), and the data structure used for the lookup. Anyway, for 2**30 256-bit keys, each storing a 64-bit integer, you'd need around 40 - 50 GB at the minimum.

Running a lookup tree on a GPU spells trouble. Even if doable, the memory latency will kill any sort of performance. GPUs are compute devices, while lookups are a memory-intensive operation.
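To make the disk-backed option concrete, here is a minimal sketch of a lookup against a hypothetical SQLite table points(x BLOB PRIMARY KEY, k INTEGER); the schema and names are assumptions for illustration, not what PointsBuilder actually writes:

Code:
#include <sqlite3.h>
#include <cstdint>

// Returns true and sets *outK if the 32-byte X coordinate is in the table.
bool lookup_x(sqlite3* db, const unsigned char x[32], int64_t* outK) {
    sqlite3_stmt* stmt = nullptr;
    if (sqlite3_prepare_v2(db, "SELECT k FROM points WHERE x = ?1;",
                           -1, &stmt, nullptr) != SQLITE_OK)
        return false;
    sqlite3_bind_blob(stmt, 1, x, 32, SQLITE_STATIC);
    bool found = (sqlite3_step(stmt) == SQLITE_ROW);   // B-tree index seek happens here
    if (found) *outK = sqlite3_column_int64(stmt, 0);
    sqlite3_finalize(stmt);
    return found;
}

The DBMS keeps hot B-tree pages cached in RAM, which is exactly the "let the DBMS take care of what's cached" behavior described above.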
OK, let's say we have 60 GB of RAM, and we computed the 2**30 X points into RAM. Will the lookup be instant (O(1))?

I have this algorithm that can solve puzzle 58 in (1/3)*sqrt of the steps BSGS needs to solve it, but we need the 2**30 DB. The algorithm is a little bit nonsense, but it worked; still, if we can't look up in instant O(1), it's not practical.
Post
Topic: Re: PointsBuilder - fast CPU range points generator
Board: Development & Technical Discussion
by farou9 on 12/05/2025, 13:32:02 UTC
How much RAM do we need to store 2^30 points, and what GPU or GPUs do we need to keep the operation and lookup speed "instant"? Like, if our GPU can do 10^9 scalar ops per second, how can we maintain that speed and the matching lookups?
Post
Topic: Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
Board: Bitcoin Discussion
by farou9 on 10/05/2025, 00:45:05 UTC
Something I had forgotten to mention: while the prefix method does not represent a global improvement over the sequential method, this only applies in cases where 100% of the ranges are being scanned. However, if one is testing luck rather than exhaustive searching, the prefix method is always the better choice. If we exclude the option of scanning omitted ranges in extreme cases, the prefix method will achieve the objective faster, with fewer checks, and a 90% success rate.

The times when sequential search statistically matches the prefix method occur in its worst-case scenarios, which only represent 10% of instances. Therefore, since most users are not attempting to scan the full range 71, the best option remains the prefix method, as it provides the greatest statistical advantages with just a 10% risk.

In the unlikely event that one reaches that point in the process, those omitted ranges could always be saved for future reference in a text file.
Do you have a script for your method?

What script do you need?! Generating prefixes is like handling ≈60 million prefixes and brute-forcing them one by one until you get it. The game isn’t about scripts... it’s about hardware. You need a ton of GPUs. You have to find the key inside this number:

1,180,591,620,717,411,303,424

Read the number twice.
Easy: study quantum mechanics and computer science and... etc., then create a fault-tolerant quantum computer with 71 real qubits, and voilà!! You'd find it faster than the blink of an eye.
Post
Topic: Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
Board: Bitcoin Discussion
by farou9 on 10/05/2025, 00:04:11 UTC
Something I had forgotten to mention: while the prefix method does not represent a global improvement over the sequential method, this only applies in cases where 100% of the ranges are being scanned. However, if one is testing luck rather than exhaustive searching, the prefix method is always the better choice. If we exclude the option of scanning omitted ranges in extreme cases, the prefix method will achieve the objective faster, with fewer checks, and a 90% success rate.

The times when sequential search statistically matches the prefix method occur in its worst-case scenarios, which only represent 10% of instances. Therefore, since most users are not attempting to scan the full range 71, the best option remains the prefix method, as it provides the greatest statistical advantages with just a 10% risk.

In the unlikely event that one reaches that point in the process, those omitted ranges could always be saved for future reference in a text file.
Do you have a script for your method?
Post
Topic: Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
Board: Bitcoin Discussion
by farou9 on 09/05/2025, 09:06:36 UTC
I have a theory regarding the remaining puzzles. I hope that those who find the key to the next puzzle will be able to delight me with bitcoins. Would you like to hear it?

I will give away 13% of all future finds  Cool

My theory is called Three Numbers. They are 4, 7, and 9. It is applicable in this puzzle.
 Those who don't believe can apply the two previous puzzles using these numbers, and everything will fall into place!
 For the script creators, I would also ask you to send me the working script in a private message if you create it.
 If you want, I can write examples for you and post them here.
What do you want us to do with 4, 7, and 9?
Post
Topic: Re: PointsBuilder - fast CPU range points generator
Board: Development & Technical Discussion
by farou9 on 30/04/2025, 02:49:51 UTC
So they are all stored in one file?

And how do I run it? I can deal with C++ and Python, but C, I have no idea how that works.

Compile it and run it according to the README maybe. You can do what you want with the database afterwards, as long as you interpret the values as offsets to the base key of the range.

The code seems stable; looks like the only cases where results are wrong are when you give it a range start that equals 512 (or N - 1534). I'll add some checks for these two edge cases. So you're good if you want to dump keys from 1 to 1.000.000.000 Smiley But you have to disable the check in main.c to do that, for now.

As for your first question: yes it dumps data into a database. I have a feeling you want something that can dump 1 billion points to disk instantly. However this code streams keys in non-sequential order, because of how the ranges are scanned in parallel.

I'm not aware of anything faster than an actual database if you really want the X values indexed in order to be able to later query it to get the scalar. The only thing that's faster is to not use a database and work in RAM.
But the question is: what is the structure of the database? If it's one file, there is no advantage for lookup.
So they are stored randomly.

Well, that much RAM is expensive.
Post
Topic: Re: PointsBuilder - fast CPU range points generator
Board: Development & Technical Discussion
by farou9 on 29/04/2025, 14:18:52 UTC
What is the main purpose of the code?

How does it store the X points exactly?

Fast points generation of a full range.

You can see how the affine X coordinate is saved in db.c - what is unclear? The X bytes get stored into an index (little-endian bytes), and the key offset (from 0) as an integer value. This is what you wanted, no? If you want it saved another way, just adapt the code, and support the inevitable slowdowns.
So they are all stored in one file?

And how do I run it? I can deal with C++ and Python, but C, I have no idea how that works.
Post
Topic: Re: PointsBuilder - fast CPU range points generator
Board: Development & Technical Discussion
by farou9 on 29/04/2025, 13:15:13 UTC
Yaw!

I built a C library / demo app that demonstrates how libsecp256k1 can be used to do really fast batch addition and scan a given range from start to finish.

The resulting affine points can be streamed and consumed via callbacks, or this can simply be used as a crude measurement instrument to check your CPU's performance.

https://github.com/kTimesG/PointsBuilder

How does it work?

- give it a range start, a size, and a number of threads to use. The rest is magic!

Only two EC point multiplications are ever needed. For example, if you need to fully generate all of the first 1 billion points starting from some scalar k (only 64-bit for this demo), and do it on 64 threads, you still only need two initial point multiplications!

The code computes a constant points array, with G increments.

Batch addition is then handled directly using secp256k1 arithmetics.

The algorithm for my implementation of a batch addition is highly optimized - it is NOT the usual algorithm you might have seen elsewhere.
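For reference, the "usual algorithm you might have seen elsewhere" is batch affine addition built on Montgomery batch inversion: amortize one modular inverse over the whole batch by chaining running products of the denominators. A toy sketch over a 64-bit prime field (an assumption for brevity; not kTimesG's optimized variant, and not real secp256k1 field arithmetic):

Code:
#include <cstdint>
#include <vector>

using u64 = uint64_t;
static const u64 P = 0xFFFFFFFFFFFFFFC5ull;        // 2^64 - 59, a prime (toy field)

// __uint128_t is a GCC/Clang extension, used here to avoid mulmod overflow.
static u64 mulmod(u64 a, u64 b) { return (u64)((__uint128_t)a * b % P); }

static u64 powmod(u64 a, u64 e) {                  // a^e mod P, square-and-multiply
    u64 r = 1;
    for (; e; e >>= 1, a = mulmod(a, a))
        if (e & 1) r = mulmod(r, a);
    return r;
}

// Batch inversion: one Fermat inverse + ~3n multiplications instead of n
// inversions. In batch EC addition, v holds the (x2 - x1) denominators of
// every point addition in the batch.
void batchInvert(std::vector<u64>& v) {
    std::vector<u64> prefix(v.size());
    u64 acc = 1;
    for (size_t i = 0; i < v.size(); i++) {        // forward pass: running products
        prefix[i] = acc;
        acc = mulmod(acc, v[i]);
    }
    u64 inv = powmod(acc, P - 2);                  // invert the product of everything
    for (size_t i = v.size(); i-- > 0; ) {         // backward pass: peel off inverses
        u64 vi = v[i];
        v[i] = mulmod(inv, prefix[i]);             // = vi^-1
        inv = mulmod(inv, vi);
    }
}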

The goal of this demo is to show that it can run really, really FAST - in my tests, it's at least two times faster than the usual suspects that you may find in other projects, especially the ones in almost all VanitySearch projects and their clones.

The demo also allows saving the X coordinates into a database - this is not the goal of the code though.

You can run simple benchmarks by not specifying a database for storing results (because actually storing the results is several hundred times slower than computing them in RAM).

Important note (forgot to add it in the GitHub README): the range start needs to be above 1024 (because otherwise the results will be incorrect - point doubling is assumed to never happen in the batch adder logic).

Note: this thread is self-moderated. Please no BS / AI / non-sense, except things that are related to this library.

What is the main purpose of the code?

How does it store the X points exactly?
Post
Topic: Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
Board: Bitcoin Discussion
by farou9 on 27/04/2025, 23:38:37 UTC
Hello everyone. Guys, please help me with the script. I made a script in Python that calculates addresses in order, but it calculates on the CPU. The speed is 200 Kkey/s. I have already tried everything. Tell me a script that would calculate on the GPU. Please, someone.
Tell ChatGPT to make you a CUDA script.
Post
Topic: Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
Board: Bitcoin Discussion
by farou9 on 26/04/2025, 21:36:04 UTC
No harm in hoping.

I just want to first see how fast it can solve smaller bits, 70 and down.

OK. I cooked up something that uses all cores to build all the points in a given range.

It can do 50 - 60 MK/s, if you have 64 cores. But the problem is the saving - SQLite can only ever write in synchronized mode (which makes sense, since it needs to check the index), and the best I got is around 300.000 inserts per second. Saving to files is nonsense, because then you won't be able to efficiently query the X value (which is the main point, after all).

I'll upload the code tomorrow on GitHub so you can test it.
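The standard way to push toward that insert ceiling is to batch rows into explicit transactions with a single prepared statement; a sketch under assumed names (the points(x, k) table and the chunk size are illustrative, not the actual PointsBuilder schema):

Code:
#include <sqlite3.h>
#include <cstdint>

// Hypothetical bulk-insert loop: one prepared statement, commits every 100k rows.
void bulkInsert(sqlite3* db, const unsigned char (*xs)[32],
                const int64_t* ks, size_t n) {
    sqlite3_exec(db, "BEGIN;", nullptr, nullptr, nullptr);
    sqlite3_stmt* stmt = nullptr;
    sqlite3_prepare_v2(db, "INSERT INTO points(x, k) VALUES(?1, ?2);",
                       -1, &stmt, nullptr);
    for (size_t i = 0; i < n; i++) {
        sqlite3_bind_blob(stmt, 1, xs[i], 32, SQLITE_STATIC);
        sqlite3_bind_int64(stmt, 2, ks[i]);
        sqlite3_step(stmt);
        sqlite3_reset(stmt);                       // reuse the statement
        if ((i + 1) % 100000 == 0)                 // commit in chunks
            sqlite3_exec(db, "COMMIT; BEGIN;", nullptr, nullptr, nullptr);
    }
    sqlite3_finalize(stmt);
    sqlite3_exec(db, "COMMIT;", nullptr, nullptr, nullptr);
}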
Well, I have 4 cores, so I am technically cooked.
"300k inserts/s": I have a C++ script that gives 30k/s, but it works differently and will take some days to finish the billion. It opens 3,200 files at a time and re-scans the whole range for each batch, storing only the points whose prefixes match the currently opened files.


Code:
#include <iostream>
#include <fstream>
#include <sstream>
#include <iomanip>
#include <string>
#include <vector>
#include <unordered_map>
#include <chrono>
#include <filesystem>
#include <secp256k1.h>

namespace fs = std::filesystem;

std::string to_hex(const unsigned char* data, size_t len) {
    std::ostringstream oss;
    for (size_t i = 0; i < len; ++i)
        oss << std::hex << std::setw(2) << std::setfill('0') << (int)data[i];
    return oss.str();
}

void increment_privkey(unsigned char* privkey) {
    for (int i = 31; i >= 0; --i) {
        if (++privkey[i] != 0) break;
    }
}

int main() {
    const uint64_t num_scalars = 1'000'000'000;
    const size_t batch_size = 3200;
    const std::string output_dir = "x_values_by_prefix";

    fs::create_directories(output_dir); // Create the output directory if it doesn't exist
    secp256k1_context* ctx = secp256k1_context_create(SECP256K1_CONTEXT_SIGN);

    for (int batch_start = 0; batch_start < 65536; batch_start += batch_size) {
        std::unordered_map<std::string, std::ofstream> file_map;
        std::vector<std::string> prefixes;

        // Open up to 3200 (batch_size) files for this batch
        for (int i = batch_start; i < batch_start + batch_size && i < 65536; ++i) {
            std::stringstream ss;
            ss << std::hex << std::setw(4) << std::setfill('0') << i;
            std::string prefix = ss.str();
            prefixes.push_back(prefix);
            
            // Open the file in append mode, to ensure we don't overwrite existing data
            // If the file exists, it appends; if not, it creates a new file.
            file_map[prefix].open(output_dir + "/" + prefix + ".txt", std::ios::app);
        }

        // Reset private key to scalar = 1
        unsigned char privkey[32] = {0};
        privkey[31] = 1;

        secp256k1_pubkey pubkey;
        unsigned char pubkey_serialized[65];
        size_t pubkey_len = 65;

        size_t stored_count = 0;
        auto start_time = std::chrono::steady_clock::now();

        for (uint64_t i = 1; i <= num_scalars; ++i) {
            if (!secp256k1_ec_pubkey_create(ctx, &pubkey, privkey)) {
                std::cerr << "Invalid private key at scalar " << i << "\n";
                increment_privkey(privkey);
                continue;
            }

            pubkey_len = 65;
            secp256k1_ec_pubkey_serialize(ctx, pubkey_serialized, &pubkey_len, &pubkey, SECP256K1_EC_UNCOMPRESSED);
            std::string x_hex = to_hex(pubkey_serialized + 1, 32);
            std::string prefix = x_hex.substr(0, 4); // Get the first 4 characters of the x-coordinate (prefix)

            // Write to the appropriate file based on the prefix
            if (file_map.find(prefix) != file_map.end()) {
                file_map[prefix] << x_hex << " " << i << "\n";  // Append the x value and scalar index to the file
                ++stored_count;
            }

            increment_privkey(privkey); // Increment the private key for the next iteration

            // Log progress every 100,000 iterations
            if (i % 100000 == 0) {
                auto elapsed = std::chrono::steady_clock::now() - start_time;
                double seconds = std::chrono::duration_cast<std::chrono::seconds>(elapsed).count();
                double rate = i / (seconds ? seconds : 1.0);
                std::cout << "Processed: " << i << ", Stored: " << stored_count
                          << ", Time: " << seconds << "s, Speed: " << rate << " keys/sec\n";
            }
        }

        // Close all files after processing the batch
        for (auto& [_, fout] : file_map) fout.close();

        // Log batch completion
        std::cout << "Finished batch starting from prefix: " << prefixes.front() << " → Stored: " << stored_count << "\n";
    }

    secp256k1_context_destroy(ctx); // Clean up secp256k1 context
    std::cout << "All batches completed!\n";
    return 0;
}
Post
Topic: Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
Board: Bitcoin Discussion
by farou9 on 26/04/2025, 20:00:57 UTC
And it might not even take 2^30 unique randoms; we don't know.

The problem I want the solution for is the database creation; hunting p135 is not practical for me. I might try for one night after making the database, to see if I get lucky.

I guess you really, really, really want that 1 billion points database. Fine, I'll do it a bit later today. But don't get your hopes up that such a DB increases your luck somehow.
No harm in hoping.

I just want to first see how fast it can solve smaller bits, 70 and down.