Showing 20 of 35 results by Valera909
Post
Topic
Board Bitcoin Discussion
Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
by
Valera909
on 25/05/2025, 18:23:05 UTC

I get where you're coming from. It's definitely tough to get everyone on the same page, especially when each person has their own theory and approach.

But I'm not trying to enforce centralized control or reach some global agreement.

What I'm suggesting is more like an open infrastructure with a few basic principles. Not rules, just a framework. Some people have idle GPUs, others are strong coders, and some have solid heuristics. If we can bring those strengths together in a flexible system, everyone benefits.

Even if some prefer to work solo, that's completely fine. This doesn't interfere with that. It's simply an option for those who want to collaborate more efficiently.

If even a small group of three to five people are willing to try a structured approach, that's already enough to form a working foundation. From there, it can grow naturally if it proves useful.

yea i like the idea of a "shared pool" like btcpuzzle.info (not really shared), but where everyone gets an equivalent reward based on their work. what do yall think?

You mean like ttd71 pool ?


Mistakes as a Philosophy of Excuse: A Classic Mental Trap That Kills Growth
We all break, fall, and mess up in this world. That’s normal, no argument there. The problem isn’t the mistakes themselves, but how we perceive them and what we do with them.

Instead of honestly admitting, “That was a mistake, it held us back, and I could’ve done better,” many turn their failures into pretty stories that fit a convenient narrative:

“It was a lesson from fate, it had to be.”

“Without that, I wouldn’t have grown, it made me stronger.”

“The universe decided it that way, so it’s fair.”

“I got stronger because I fell.”

Sounds cool and inspiring, right? But it’s not objective truth; it’s an excuse. A comfy excuse for not wanting to change the system, not wanting to change ourselves, not wanting to take responsibility for real progress. It’s a philosophy of powerlessness disguised as wisdom.

Why Does This Happen?
Admitting that a mistake was a dead end means accepting not only failure but also pain, responsibility, even loss. It’s hard, scary, and painful.

Way easier to tell yourself and the world: “Everything happened as it should, I just didn’t get it then,” and keep floating on inertia, changing nothing.

This philosophy lifts the burden of searching for real, hard solutions and actions; we just “survive” failures and pretend it’s “experience.”

Why Is This “Survivor Philosophy” Dangerous?
It turns failure into routine, making mistakes something “mandatory” that just repeats over and over.

It blocks growth because it locks us in a loop: “suffering → acceptance → resignation.” Instead of breaking the cycle, we blend into it.

People stop fighting for real change, stop seeking alternatives; they just learn to “survive” defeats, glorifying them as inevitable experience.

Practical Consequences in Crypto-Puzzles and Life
This approach is not just philosophy but a real trap that slows progress in our projects and efforts.

Many who work solo, stuck in the “survivor philosophy,” are afraid to unite.

They fear showing vulnerability, imperfection, fear being “exposed” by the group.

Instead of seeing union as a chance to grow, learn, and expand possibilities, they cling to the illusion of full control, even if it’s in isolation.

Why Collaboration Pays Off Despite Fears
Increased power and speed of progress.
Combined resources and brains push the work faster. In crypto-mining and puzzle-solving, collective effort stabilizes results and raises chances.

Resource optimization.
Splitting costs and efforts: technical, mental, emotional. No need to reinvent the wheel solo.

Risk reduction.
One solo failure means collapse; in a team, one’s mistake is covered by others.

Innovation stimulation.
Diverse views and skills open new paths a solo player might never see.

Technical side
It’s crucial that technical mechanisms to measure contribution are tuned not only to “power” (hardware, compute) but also to overall input: ideas, optimizations, process organization, communication, and more.
A properly built contribution system lets every participant see their effort valued fairly and adequately. This eases internal conflicts and keeps motivation high.
Designing the system so it benefits everyone means it’s not about hardware, but about the team, the common movement, and synergy.

How to Fight Fears and Doubts
Transparent result distribution mechanisms (like smart contracts) remove fear of theft and injustice.

Clear role division lets everyone give their best without chaos and bureaucracy.

Small pilot groups where collaboration can be tested are great for building trust.
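To make the “transparent distribution” point concrete, here is a minimal sketch of proportional reward splitting by reported work units. The function name and the largest-remainder rounding are my own illustration, not a reference to any existing pool's code:

```python
def split_reward(total_sats, contributions):
    """Split a prize proportionally to reported work units.

    Uses largest-remainder rounding so the integer shares
    sum exactly to total_sats (no satoshi lost or invented).
    """
    total_work = sum(contributions.values())
    raw = {who: total_sats * w / total_work for who, w in contributions.items()}
    shares = {who: int(v) for who, v in raw.items()}   # floor each share
    # Hand the leftover satoshis to the largest fractional remainders.
    leftover = total_sats - sum(shares.values())
    for who in sorted(raw, key=lambda w: raw[w] - shares[w], reverse=True)[:leftover]:
        shares[who] += 1
    return shares
```

In an actual pool this logic would live behind the transparent mechanism (e.g. a smart contract), with “work units” defined however the group agrees: keys scanned, code contributed, and so on.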

What to Do with the Philosophy of Excuses and Why You Need to Break It
Stop glorifying failures and excusing them with pretty words.

Honestly admit: “I messed up, need to change approach.”

Focus on real actions, not on pretty stories.

Learn from mistakes as concrete data, not as fate-driven lessons.

Bottom Line
If you want more than just surviving and awkwardly fumbling along solo, forget this “survivor philosophy.” It only gives you a comfy trap.

Real growth starts where you are honest with yourself and others, where mistakes aren’t reasons to shut up or hide but signals to change.

Unite, build teams, build trust through transparency and honesty. Stop fearing vulnerability   it’s what makes us stronger.

Because true power isn’t in fighting the world alone but in the collective breakthrough beyond the usual.
Post
Topic
Board Bitcoin Discussion
Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
by
Valera909
on 25/05/2025, 13:58:11 UTC
Dear people,

Is there a pure CUDA program that just brute-forces keys in the most efficient way possible, with no unnecessary complexity introduced, or an algorithm-based program that is optimal?

A solid meme, no doubt. You’ve already eaten a couple of cakes, got the experience. But now you know for sure it wasn’t a solo game — it was a coordinated, organized effort.
Post
Topic
Board Bitcoin Discussion
Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
by
Valera909
on 25/05/2025, 11:58:58 UTC
Maximum efficiency and faster puzzle solving.

Let’s stop swinging pickaxes blindly and start working like a well-oiled machine. If you’re interested, I can help organize and provide basic scripts to manage the process.

I don’t think you can reach any consensus on this topic. People are a little too sure about their idea of what’s real and what’s not.


I get where you're coming from. It's definitely tough to get everyone on the same page, especially when each person has their own theory and approach.

But I'm not trying to enforce centralized control or reach some global agreement.

What I'm suggesting is more like an open infrastructure with a few basic principles. Not rules, just a framework. Some people have idle GPUs, others are strong coders, and some have solid heuristics. If we can bring those strengths together in a flexible system, everyone benefits.

Even if some prefer to work solo, that's completely fine. This doesn't interfere with that. It's simply an option for those who want to collaborate more efficiently.

If even a small group of three to five people are willing to try a structured approach, that's already enough to form a working foundation. From there, it can grow naturally if it proves useful.
Post
Topic
Board Bitcoin Discussion
Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
by
Valera909
on 25/05/2025, 10:04:00 UTC
Topic: Organizing a Community for Collaborative Private Key Search - Idea

Hey everyone!

I see many people are randomly hammering away at crypto puzzles however they can, but that’s not the way to get results. I propose building a community where power and tasks are distributed smartly and transparently, not “every man for himself.”

Key Ideas:
Roles by specialization:

Analysts propose hypotheses and strategies.

Coders write and optimize tools.

Heavy compute providers run intense calculations.

Validators check results.

Resource managers coordinate power and tasks.

Power allocation, not selling:

Compute power isn’t sold or traded.

It’s allocated to specific tasks based on priorities and skills.

Transparency and control over who contributes what and where it goes.

Process:

Submit a power request.

Manager allocates resources to active tasks.

Continuous monitoring of workload and efficiency.

Benefits:

No chaos or wasted effort.

Everyone contributes according to their strength.

Maximum efficiency and faster puzzle solving.

Let’s stop swinging pickaxes blindly and start working like a well-oiled machine. If you’re interested, I can help organize and provide basic scripts to manage the process.
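The request → allocation → monitoring process above can be sketched in a few lines of Python. This is a hypothetical toy coordinator of my own; the `Task` fields and the greedy highest-priority-first policy are just illustrations of the shape such a manager could take:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    priority: int                            # lower number = more urgent
    name: str = field(compare=False)
    keys_needed: int = field(compare=False)  # unassigned work units

def allocate(tasks, offers):
    """Greedy sketch: give each offered chunk of compute to the most
    urgent task that still needs work. Returns (worker, task, amount)."""
    heap = list(tasks)
    heapq.heapify(heap)                      # order tasks by priority
    plan = []
    for worker, capacity in offers:
        while capacity > 0 and heap:
            task = heap[0]
            take = min(capacity, task.keys_needed)
            plan.append((worker, task.name, take))
            task.keys_needed -= take
            capacity -= take
            if task.keys_needed == 0:        # task fully covered
                heapq.heappop(heap)
    return plan
```

A real version would add the monitoring side (progress reports, reassigning stalled ranges), but even this shows how idle GPUs get pointed at the highest-priority ranges instead of everyone hammering at random.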
Post
Topic
Board Bitcoin Discussion
Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
by
Valera909
on 24/05/2025, 12:56:00 UTC
They complained that I was using AI. Now they will get 100 of them who write with "—". The only thing left is to run away from here if it annoys you. There is no way you can get away with AI writing.  Grin
The problem isn't that someone uses '—' 😏, but whether they come up with something new using '—'.
Post
Topic
Board Bitcoin Discussion
Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
by
Valera909
on 24/05/2025, 11:11:58 UTC
AI-generated
23%

Human-written
77%

One of the big reasons sane people don't post here (or retire) is that it's a waste of time to fight against idiocracy (which is exactly where society is heading if the only 2 things people are able to do are: ask AI about shit they're too dumb to learn, and ask AI whether something was written by AI). It's getting too boring to hear every 2 sentences that "all your posts are AI based". I guess people will eventually lose their brains altogether because of that shit. Fuck education and everything related to it.


Listen, AI is just a tool. Knowledge by itself is nothing more than raw text files—like school textbooks in plain TXT format without any meaningful connections or context. Without properly aligning and interpreting that knowledge, you won’t build a neural network capable of solving tasks smoothly or respectfully.

Basically, having a tool that handles complex tasks doesn’t automatically make you dumb for using it. On the contrary—you level up to a higher plane of abstraction, shifting away from the technical grind to managing meaning. You don’t just push buttons—you create and interpret concepts, letting the tool handle the messy technical parts.

Looking at the bigger picture—the world built on debts and deficits will always grow exponentially into chaos, spawning endless conflicts and crises. But within that chaos lies the chance to see deeper and build something truly your own, outside the dumb consumer cycle and “idiocracy.”

So AI isn’t the end of intellectual evolution—it’s the start of a new layer you have to learn to read and use. The key is not to become a script in someone else’s system, but to stay the architect of your own abstractions.
Post
Topic
Board Bitcoin Discussion
Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
by
Valera909
on 24/05/2025, 11:00:43 UTC
Let’s assume we know the private key k lies within a limited range:

k ∈ [-1000, +1000] mod N
...
That’s 1 bit of information recovered via curve symmetry and limited search space.
That’s useful:

In biased brute-force attacks

This only works when you're trying to find the private key of a known public key. For address puzzles, you can't shift the search interval. And for public key puzzles, brute-force is not used because exponentially faster algorithms exist (and some already take advantage of the curve symmetry).

That's a solid and worthy response — agreed.
However, curve symmetry alone reduces the search space by at least 2×, even before taking into account the full stack of optimized algorithms.

As for determining whether a point is the negation (i.e., the reflection over the x-axis) of another point on the curve,
is there a faster method than direct computation for checking it?
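For context on the “exponentially faster algorithms” mentioned in the quote: baby-step giant-step is the textbook square-root method for discrete logs. Here is a toy Python version over a small multiplicative group mod p — not the secp256k1 group, and the prime is my own pick, but the O(√n) idea is the same one the curve-based solvers exploit:

```python
from math import isqrt

def bsgs(g, h, p):
    """Solve g^x ≡ h (mod p) by baby-step giant-step.

    Needs O(sqrt(p)) time and memory instead of O(p) brute force:
    write x = i*m + j, precompute the "baby steps" g^j, then walk
    "giant steps" h * g^(-m*i) until one lands in the baby table.
    """
    m = isqrt(p - 1) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # g^j -> j
    giant = pow(g, -m, p)                        # g^(-m) mod p
    gamma = h % p
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = (gamma * giant) % p
    return None                                  # h not a power of g
```

On the curve, the symmetry point still holds: since each stored point can stand for both P and −P, the effective search space is halved on top of the square-root speedup.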

Post
Topic
Board Bitcoin Discussion
Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
by
Valera909
on 24/05/2025, 08:58:01 UTC
AI-generated
23%

Human-written
77%

One of the big reasons sane people don't post here (or retire) is that it's a waste of time to fight against idiocracy (which is exactly where society is heading if the only 2 things people are able to do are: ask AI about shit they're too dumb to learn, and ask AI whether something was written by AI). It's getting too boring to hear every 2 sentences that "all your posts are AI based". I guess people will eventually lose their brains altogether because of that shit. Fuck education and everything related to it.

👋 Hey, thanks for the detailed response — you're absolutely right in the formal mathematical sense:

In elliptic curves over a finite field, there's no such thing as “distance,” “direction,” or “sign” in the usual linear sense.

The point at infinity is an abstract group identity element, and the coordinates of curve points are elements of
𝔽ₚ,
where the notion of "sign" or "closeness to zero" doesn't apply in the conventional way.

But that’s not what I meant. Let me clarify:

🔄 What I actually meant (and why it works)
Let’s assume we know the private key k lies within a limited range:

k ∈ [-1000, +1000] mod N
(This can happen in real-world scenarios — e.g., weak key generation or partial leakage.)

Now suppose we start from:


0⋅G
...and increment step-by-step with:

+1⋅G, +2⋅G, +3⋅G, ...
...comparing against a known public key P. If at some step i we reach -P instead of P, then:

i⋅G = -P
⇒ i⋅G = (N - k)⋅G
⇒ i ≡ N - k  (mod N)
⇒ k ≈ N - i
Since i is small, k must be near N, which means it's “negative” in the limited range [-1000, +1000].

So, even though we don’t recover the exact value of k, we can infer its sign in that range.

That’s 1 bit of information recovered via curve symmetry and limited search space.
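The scalar-side logic above can be simulated without any curve arithmetic: hitting P at step i corresponds to i == k, and hitting −P corresponds to i ≡ N − k. A toy sketch, with a small stand-in N of my choosing rather than secp256k1's real order:

```python
N = 1_000_003   # hypothetical small group order (stand-in for secp256k1's n)

def recover_small_scalar(k_mod_N, max_steps=2000):
    """Step i = 1, 2, ... as if computing i*G and comparing against P and -P.

    Over bare scalars that means testing i == k ("hit P", so k = +i)
    and i ≡ N - k (mod N) ("hit -P", so k = -i). Recovers both the
    sign and the value when |k| is within the search window.
    """
    for i in range(1, max_steps + 1):
        if i == k_mod_N:              # i*G == P  -> k is positive
            return i
        if (i + k_mod_N) % N == 0:    # i*G == -P -> k is "negative"
            return -i
    return None                       # |k| too large for this window
```

The point of the sketch is only the bookkeeping: a "negative" k in the range [-1000, +1000] shows up as a value near N, and the −P hit exposes that in |k| steps instead of N − |k|.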

🧠 Why this matters
I'm not claiming you can infer point “direction” or “orientation” in the general case.

I'm saying: If you know that k is small (near 0), and you're doing incremental checks, you can determine whether it’s k or N - k that maps to the target point — i.e., you recover the sign of the scalar.

That’s useful:

In biased brute-force attacks

With weak RNGs

Or when targeting skewed key distributions

⚙️ Technically you’re right, but...
I wasn’t trying to redefine elliptic curve theory.

I was describing a practical case where a directional relationship between P and -P becomes observable because we’re restricting ourselves to a small enough scalar space and checking sequentially.

If that came across differently — fair enough, maybe I worded it poorly.

Still, in the dirty world of broken keygens and leaky bits, that one bit can be golden.
🏭🌀 We work in the rust and shadows — not everything is clean math.

❓Question:
Aside from using Y-parity (which obviously doesn’t help here since positive and negative values are evenly distributed),
do you know of any other properties or heuristics that might leak information about scalar direction or sign — in similarly constrained or biased scenarios?
Post
Topic
Board Bitcoin Discussion
Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
by
Valera909
on 23/05/2025, 11:41:05 UTC
Is it feasible in a non-constant-time implementation to detect the sign of a point from timing differences?

What are the main factors that could cause timing variations in these operations (coordinate inversion, branching, etc.)?

What best practices exist for measuring and exploiting such timing differences?

Are there known vulnerabilities or examples from popular libraries related to this?

There's no such thing as sign of a point. Points are pairs of coordinates.
There's no such thing as sign of a point's coordinate. Modular arithmetic doesn't have signs.
Best practices: don't run variable-time code if you don't want to expose everything your code computes, like any secret values. It's a critical vulnerability. If it's not required to be secure, use variable time code to gain more speed.
Known vulnerabilities: timing variations allow to completely retrieve the processed values.
Examples: a lot, OpenSSL had a while loop that ended prematurely. Some guy managed to retrieve private SSH keys remotely by timing server handshake responses.

Seems like people get crazier in direct proportion to the BTC price.


Thanks so much for the detailed breakdown! 🙏

I’ve got a basic grasp of modular arithmetic, but initially interpreted the situation differently — like, the point −P kinda feels “greater” than P, especially when you look at it through the lens of reflection over the x-axis. 📉📈

Now here’s the spicy bit 🌶️:
If you start walking from −P, incrementing the private key by 1 each time (i.e., checking for some k where k⋅G = P), you'll actually hit the point P relatively quickly — because they’re connected through the curve’s group order.
But if you go the other way — starting at P, trying to reach −P via +1 steps — you'll have to walk almost the entire group order. 🚶‍♂️🔁

That means direction matters, and that’s something we might be able to leverage. This method gives a hint about the sign or orientation of the point. 👀🧭

🤔 I'm wondering — are there any libs or implementations that let you determine which of two points is “closer to 0” or gives clues about their directional relationship?

💡 Also, are there any side channels we can sniff — like timing differences in signature verification, subtle quirks in point serialization, byte structure anomalies, or even differences when decompressing points — that could leak info about the point’s sign or help narrow the keyspace?

If there’s even the faintest trace of something like that — it’s a potential vulnerability. Would love to hear thoughts from anyone who’s dug deep into libsecp256k1 or forks. 🔬🔓

Again — massive thanks for the insights. Really appreciate the depth! 🚀🧠💥
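On the variable-time warning in the quoted reply: the classic illustration is byte-string comparison. In this sketch, `naive_equal` is a deliberately leaky comparison of my own; `hmac.compare_digest` is the standard library's constant-time primitive:

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    # Variable-time: returns at the first mismatching byte, so the
    # running time leaks how long the matching prefix is — an attacker
    # can recover a secret byte by byte from timing alone.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def safe_equal(a: bytes, b: bytes) -> bool:
    # Constant-time: examines every byte regardless of where a
    # mismatch occurs, so timing reveals nothing about the contents.
    return hmac.compare_digest(a, b)
```

The same principle applies to point arithmetic: any branch or early exit whose duration depends on secret data (like a coordinate's value) is a measurable channel, which is exactly why hardened libraries like libsecp256k1 are written to be constant-time.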
Post
Topic
Board Bitcoin Discussion
Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
by
Valera909
on 23/05/2025, 08:29:24 UTC
Would appreciate any advice or pointers to resources!


If your code does something like if (point_is_negative) { do_this } else { do_that }, the CPU’s branch predictor leaves breadcrumbs kinda like a trail of clues. And if your code goes if (y < 0) { do_something_faster() }, that’s a dead giveaway, no cap.

Modular inversion (for affine coordinates) is slow as hell (AF), and it’s often variable-time, which is a pain in the neck.

For beginners, the best possible version to check out is JLP SECP256K1. The secp256k1 API doesn’t hand out low-level methods, but this dude basically precomputes G points from 0..255 (both + and -) in batches of 512. I’m not 100% sure about the math behind it, but it’s gotta be faster. Cyclone (which uses his code) cranks out 42M keys/s on 12 threads and 7M keys/s on a single thread.

If you wanna see all the possible versions, peep AlexanderCurl’s GitHub. Dude’s got a solid collection.

Thanks a lot for your detailed answer!

I just want to make sure I understood correctly:

If you remove all timing protections and constant-time safeguards from the elliptic curve implementation, then by carefully measuring execution time you could reliably determine whether a given point is P or its negation −P, because certain conditional branches or operations behave differently depending on the sign of the point’s coordinates.

Is that correct?

So, if you feed any point as input and measure the execution time of the code, you could reliably tell whether that point is the positive P or the negative −P?
Post
Topic
Board Bitcoin Discussion
Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
by
Valera909
on 23/05/2025, 07:14:17 UTC
Hey everyone,

I’m writing my own library for elliptic curve operations and want to understand if it’s possible to distinguish, based on timing measurements, whether the second point in an operation is negative or positive (e.g., comparing
3P−4P vs.
3P−(−4P)).

Specifically:

Is it feasible in a non-constant-time implementation to detect the sign of a point from timing differences?

What are the main factors that could cause timing variations in these operations (coordinate inversion, branching, etc.)?

What best practices exist for measuring and exploiting such timing differences?

Are there known vulnerabilities or examples from popular libraries related to this?

Would appreciate any advice or pointers to resources!

Post
Topic
Board Bitcoin Discussion
Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
by
Valera909
on 23/05/2025, 04:11:20 UTC
Code:
[quote author=nikolayspb link=topic=1306983.msg65408692#msg65408692 date=1747968135]
[quote author=Nodemath link=topic=1306983.msg65405281#msg65405281 date=1747884772]
[quote author=Nodemath link=topic=1306983.msg65394991#msg65394991 date=1747633333]
import time
import hashlib
from coincurve import PrivateKey
from Crypto.Hash import RIPEMD160
import psutil
import os
import signal
import sys
import multiprocessing as mp
from bloom_filter2 import BloomFilter
from google.colab import drive

# Mount Google Drive
drive.mount('/content/drive')

# Config
FILE_NAME = "Puzzle 71.013.000.csv"
DRIVE_FOLDER = "/content/drive/MyDrive/Puzzle71"
file_path = f"{DRIVE_FOLDER}/{FILE_NAME}"
SCAN_RANGE = 100_000
TARGET_PREFIX = "f6f543"
BLOOM_CAPACITY = 1_000_000
BLOOM_ERROR_RATE = 0.001

# Load known H160 hashes into Bloom filter
KNOWN_H160S = [
    "f6f5431d25bbf7b12e8add9af5e3475c44a0a5b8",
]
bloom = BloomFilter(max_elements=BLOOM_CAPACITY, error_rate=BLOOM_ERROR_RATE)
for h in KNOWN_H160S:
    bloom.add(h)

# Read decimal numbers from CSV
with open(file_path, 'r') as f:
    lines = [line.strip() for line in f if line.strip()]
if lines[0].lower().startswith('value'):
    lines = lines[1:]
Decimal_numbers = [int(line) for line in lines]

def privatekey_to_h160(priv_key_int):
    try:
        priv = PrivateKey.from_int(priv_key_int)
        pubkey = priv.public_key.format(compressed=True)
        sha256 = hashlib.sha256(pubkey).digest()
        ripemd160 = RIPEMD160.new(sha256).digest()
        return ripemd160.hex()
    except Exception:
        return None

def optimize_performance():
    try:
        p = psutil.Process()
        if hasattr(p, "cpu_affinity"):
            p.cpu_affinity(list(range(os.cpu_count())))
        if hasattr(psutil, "REALTIME_PRIORITY_CLASS"):
            p.nice(psutil.REALTIME_PRIORITY_CLASS)
        else:
            p.nice(-20)
    except Exception as e:
        print(f"[!] Optimization warning: {e}")

def signal_handler(sig, frame):
    print("\n[!] Interrupted. Exiting.")
    sys.exit(0)

def check_key(k):
    h160 = privatekey_to_h160(k)
    if h160 and (h160.startswith(TARGET_PREFIX) or h160 in bloom):
        return ('match', k, h160)
    return ('progress', k, h160)

def process_decimal(decimal):
    start = max(1, decimal - SCAN_RANGE)
    end = decimal + SCAN_RANGE
    keys = list(range(start, end + 1))

    processed = 0
    start_time = time.time()
    last_key = None
    last_h160 = None

    ctx = mp.get_context("fork")
    with ctx.Pool(processes=os.cpu_count()) as pool:
        result_iter = pool.imap_unordered(check_key, keys, chunksize=1000)

        for result in result_iter:
            if result[0] == 'match':
                _, key, h160 = result
                print(f"\n**PREFIX MATCH FOUND!** Private key {hex(key)} produces Hash160: {h160}\n")
            elif result[0] == 'progress':
                _, key, h160 = result
                processed += 1
                last_key = key
                last_h160 = h160

    elapsed = time.time() - start_time
    speed = processed / elapsed if elapsed > 0 else 0
    print(f"\nHash160 of the last processed key {hex(last_key)} -> {last_h160}")
    print(f"[✓] Completed Decimal: {decimal} - Processed {processed} keys in {elapsed:.2f}s (Speed: {speed:.2f} keys/sec)")

def main():
    signal.signal(signal.SIGINT, signal_handler)
    optimize_performance()

    print(f"\nLoaded {len(Decimal_numbers)} Decimal numbers.")
    print(f"Scanning ±{SCAN_RANGE} around each.\nTarget prefix: {TARGET_PREFIX}")
    print(f"Bloom filter contains {len(KNOWN_H160S)} known H160 hashes.\n")

    for decimal in Decimal_numbers:
        process_decimal(decimal)

if __name__ == '__main__':
    main()


Help me out to use Rotor-Cuda GPU modules, able to run on Colab,
for private key to Hash160.
[/quote]

Whoever helps me get this code able to run on GPU,
I definitely give you 1 BTC.
NEED TO ACHIEVE AT LEAST 250M keys/sec
[/quote]
Below is the complete CUDA C implementation and Google Colab setup instructions, fully translated into English, so you can run it on Colab’s GPU (e.g. Tesla T4) and achieve well over 200 MH/s.

```cuda
// gpu_scan_secp256k1.cu
//
// GPU-based brute-forcer for secp256k1 → SHA256 → RIPEMD-160
// Looks for private keys whose RIPEMD-160(pubkey) begins with a given hex prefix.
// Designed to run on Google Colab GPUs (Tesla T4, V100, etc.)
//
// Compile with:
//   nvcc -O3 -arch=sm_70 gpu_scan_secp256k1.cu -o gpu_scan

#include <cstdio>
#include <cstdint>
#include <cstring>
#include <openssl/sha.h>
#include <openssl/ripemd.h>
#include <secp256k1.h>

#define TARGET_PREFIX   "f6f543"            // hex prefix to match
#define PREFIX_BYTES    (sizeof(TARGET_PREFIX)-1)

// Check if the first bytes of the 20-byte H160 match our target
__device__ inline bool check_prefix(const unsigned char *h160) {
    // Convert TARGET_PREFIX into raw bytes {0xf6,0xf5,0x43}
    static const unsigned char target[PREFIX_BYTES/2] = {0xf6, 0xf5, 0x43};
    for (int i = 0; i < PREFIX_BYTES/2; i++) {
        if (h160[i] != target[i]) return false;
    }
    return true;
}

// Compute RIPEMD-160(SHA256(pubkey33)) from a compressed public key
__device__
void pubkey_to_h160(const unsigned char *pub33, unsigned char out[20]) {
    unsigned char sha_digest[32];
    SHA256_CTX sha_ctx;
    SHA256_Init(&sha_ctx);
    SHA256_Update(&sha_ctx, pub33, 33);
    SHA256_Final(sha_digest, &sha_ctx);

    RIPEMD160_CTX ripemd_ctx;
    RIPEMD160_Init(&ripemd_ctx);
    RIPEMD160_Update(&ripemd_ctx, sha_digest, 32);
    RIPEMD160_Final(out, &ripemd_ctx);
}

// Each thread processes one private key: priv = base + threadIndex
__global__
void kernel_scan(uint64_t base, uint64_t total_keys) {
    uint64_t idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= total_keys) return;

    uint64_t priv = base + idx;
    // Build a 32-byte big-endian seckey
    unsigned char seckey[32] = {0};
    for (int i = 0; i < 8; i++) {
        seckey[31 - i] = (priv >> (8*i)) & 0xFF;
    }

    // Generate public key on secp256k1 curve
    secp256k1_pubkey pub;
    secp256k1_ec_pubkey_create(nullptr, &pub, seckey);

    // Serialize compressed (33 bytes)
    unsigned char pub33[33];
    size_t outlen = 33;
    secp256k1_ec_pubkey_serialize(nullptr, pub33, &outlen, &pub, SECP256K1_EC_COMPRESSED);

    // Compute RIPEMD-160(SHA256(pub33))
    unsigned char h160[20];
    pubkey_to_h160(pub33, h160);

    // Check prefix
    if (check_prefix(h160)) {
        // Print matching private key and first 3 bytes of hash
        printf("[MATCH] priv=0x%016llx  h160=%.2x%.2x%.2x...\n",
               (unsigned long long)priv,
               h160[0], h160[1], h160[2]);
    }
}

int main(int argc, char** argv) {
    if (argc != 4) {
        printf("Usage: gpu_scan <start_decimal> <num_keys> <threads_per_block>\n");
        return 1;
    }

    uint64_t start      = strtoull(argv[1], nullptr, 0);
    uint64_t num_keys   = strtoull(argv[2], nullptr, 0);
    int threadsPerBlock = atoi(argv[3]);

    // Initialize libsecp256k1 context once on the host
    secp256k1_context* ctx = secp256k1_context_create(SECP256K1_CONTEXT_SIGN);
    // (We pass nullptr into the kernel; libsecp256k1 allows this as context is unused on device.)

    uint64_t numBlocks = (num_keys + threadsPerBlock - 1) / threadsPerBlock;
    printf("Launching %llu blocks × %d threads = %llu keys…\n",
           (unsigned long long)numBlocks,
           threadsPerBlock,
           (unsigned long long)num_keys);

    // Measure time with CUDA events
    cudaEvent_t t_start, t_end;
    cudaEventCreate(&t_start);
    cudaEventCreate(&t_end);
    cudaEventRecord(t_start);

    // Launch the kernel
    kernel_scan<<<numBlocks, threadsPerBlock>>>(start, num_keys);
    cudaDeviceSynchronize();

    cudaEventRecord(t_end);
    cudaEventSynchronize(t_end);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, t_start, t_end);

    double seconds = ms / 1000.0;
    double mhps    = (double)num_keys / seconds / 1e6;
    printf("\nScanned %llu keys in %.2f s → %.2f MH/s\n",
           (unsigned long long)num_keys,
           seconds,
           mhps);

    secp256k1_context_destroy(ctx);
    return 0;
}
```

### How to run this in Google Colab

1. **Install dependencies and build libsecp256k1**

   ```bash
   !apt-get update && apt-get install -y libssl-dev
   !git clone https://github.com/bitcoin-core/secp256k1.git
   %cd secp256k1
   !./autogen.sh
   !./configure --enable-module-ecdh --enable-module-recovery --enable-experimental --with-bignum=no
   !make -j$(nproc)
   %cd ..
   ```

2. **Upload the CUDA source**
   Either use Colab’s file browser to upload `gpu_scan_secp256k1.cu`, or embed it with:

   ```bash
   %%bash
   cat > gpu_scan_secp256k1.cu << 'EOF'
   [paste the full CUDA code here]
   EOF
   ```

3. **Compile with NVCC**

   ```bash
   !nvcc gpu_scan_secp256k1.cu -o gpu_scan \
     -I./secp256k1/include -L./secp256k1/.libs -lsecp256k1 \
     -lssl -lcrypto -O3 -arch=sm_70
   ```

4. **Run the GPU scanner**

   ```bash
   # Usage: ./gpu_scan <start_decimal> <num_keys> <threadsPerBlock>
   # For your ±SCAN_RANGE window:
   start_decimal=1000000000000   # replace with your base decimal
   total_keys=$((2*100000+1))    # e.g. SCAN_RANGE=100000
   threads_per_block=256

   !./gpu_scan $start_decimal $total_keys $threads_per_block
   ```

> **Expected performance on Tesla T4 (Colab GPU): \~600–800 MH/s**, comfortably above 200 MH/s.


[/quote]



This code doesn’t actually use the GPU properly — it essentially works like a regular CPU program pretending to be CUDA magic. Let’s break this down atom by atom:

💣 What's wrong here:
1. libsecp256k1 runs on the CPU
```cpp
secp256k1_ec_pubkey_create(nullptr, &pub, seckey);
```
→ nullptr is passed instead of a context because secp256k1_context isn't available on the GPU.
This means: the function call simply doesn't happen inside the GPU kernel. And even if it could — it would be invalid.

2. OpenSSL (SHA256 and RIPEMD160) are CPU-only
```cpp
SHA256_Init(&sha_ctx);
RIPEMD160_Init(&ripemd_ctx);
```
→ These functions don't compile under CUDA. If you put them inside a __device__ function, you're tricking nvcc. Most likely:

the code won't compile,

or it compiles but runs entirely on the CPU.

3. What actually runs on the GPU? Basically… nothing.
→ The CUDA kernel:

```cpp
__global__ void kernel_scan(...) { ... }
```
calls CPU-only functions (which is impossible on GPU), so what really happens is:

either nothing runs at all,

or everything silently runs on the CPU,

or the compiler fakes compilation but the kernel doesn’t execute meaningfully.

📉 Result:
Even if you compile this with nvcc, and it appears to "work" — all the logic is CPU-based, and kernel_scan is just a wrapper, doing nothing GPU-related.

🔧 How to build real CUDA magic:

- Use a secp256k1 implementation that works on CUDA; there are some forks out there (with hacks).
- Replace OpenSSL with CUDA-native SHA256/RIPEMD160, such as cuSHA256 (available as ready-to-use kernels), or your own implementation of H160 (a bit painful, but doable).
- Keep everything inside the GPU: generate private keys, compute public keys, hash them, all directly on device.
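To make the porting target concrete, here is a minimal pure-Python model of the secp256k1 point arithmetic (addition and double-and-add scalar multiplication) that a real device kernel would have to reimplement in CUDA fixed-width integer math. This is a sketch for illustration only: Python, not CUDA, not constant-time, and not performance code.

```python
# secp256k1 field prime and standard base point G
P = 2**256 - 2**32 - 977
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def point_add(a, b):
    """Affine point addition on y^2 = x^3 + 7 over GF(P); None = point at infinity."""
    if a is None:
        return b
    if b is None:
        return a
    (x1, y1), (x2, y2) = a, b
    if x1 == x2 and (y1 + y2) % P == 0:
        return None  # P + (-P) = infinity
    if a == b:
        lam = 3 * x1 * x1 * pow(2 * y1, -1, P) % P   # tangent slope (doubling)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P    # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def scalar_mult(k, point=(Gx, Gy)):
    """Double-and-add: compute k * point (the private-key -> public-key step)."""
    result, addend = None, point
    while k:
        if k & 1:
            result = point_add(result, addend)
        addend = point_add(addend, addend)
        k >>= 1
    return result
```

On a GPU, every `pow(..., -1, P)` modular inverse and 256-bit multiply has to be hand-built from 32/64-bit limbs, which is exactly why stock libsecp256k1 cannot just be dropped into a kernel.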



What exact speed do you want to achieve from Colab?


P.S Bitcoin isn’t just code — it’s faith that in this chaos of the digital universe, there’s meaning. The world stands on trust, just like we cling to the last pixels of light in eternal darkness. If you believe in me, support with even a satoshi — it’s like sparking a tiny flame into the abyss.

For those who aren’t afraid to dive into the glitch:
1BsEVzowVsyvAs8vTVeG4o6Xi3bg315795

Let’s fuel this digital legend together — even 1 satoshi is already an act of a messiah. 🚀🖤








Post
Topic
Board Development & Technical Discussion
Re: Cuda scripts for point addition , multiplication etc
by
Valera909
on 23/05/2025, 03:49:36 UTC
Post
Topic
Board Bitcoin Discussion
Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
by
Valera909
on 23/05/2025, 02:01:59 UTC
For your Google collab try that


!pip install coincurve pycryptodome bloom-filter2




import time
import hashlib
from coincurve import PrivateKey
from Crypto.Hash import RIPEMD160
import os
import sys
from bloom_filter2 import BloomFilter
from google.colab import drive
import signal

# Mount Google Drive
drive.mount('/content/drive')

# Config
FILE_NAME = "Puzzle 71.013.000.csv"
DRIVE_FOLDER = "/content/drive/MyDrive/Puzzle71"
file_path = f"{DRIVE_FOLDER}/{FILE_NAME}"
SCAN_RANGE = 100_000
TARGET_PREFIX = "f6f543"
BLOOM_CAPACITY = 1_000_000
BLOOM_ERROR_RATE = 0.001

# Load known H160 hashes into Bloom filter
KNOWN_H160S = [
    "f6f5431d25bbf7b12e8add9af5e3475c44a0a5b8",
]
bloom = BloomFilter(max_elements=BLOOM_CAPACITY, error_rate=BLOOM_ERROR_RATE)
for h in KNOWN_H160S:
    bloom.add(h)

# Read decimal numbers from CSV
with open(file_path, 'r') as f:
    lines = [line.strip() for line in f if line.strip()]
if lines[0].lower().startswith('value'):
    lines = lines[1:]
decimal_numbers = [int(line) for line in lines]

def privatekey_to_h160(priv_key_int):
    try:
        priv = PrivateKey.from_int(priv_key_int)
        pubkey = priv.public_key.format(compressed=True)
        sha256 = hashlib.sha256(pubkey).digest()
        ripemd160 = RIPEMD160.new(sha256).digest()
        return ripemd160.hex()
    except Exception:
        return None

def signal_handler(sig, frame):
    print("\n[!] Interrupted. Exiting.")
    sys.exit(0)

def check_key(k):
    h160 = privatekey_to_h160(k)
    if h160 and (h160.startswith(TARGET_PREFIX) or h160 in bloom):
        return ('match', k, h160)
    return ('progress', k, h160)

def process_decimal(decimal):
    import multiprocessing as mp

    start = max(1, decimal - SCAN_RANGE)
    end = decimal + SCAN_RANGE
    keys = list(range(start, end + 1))

    processed = 0
    start_time = time.time()
    last_key = None
    last_h160 = None

    ctx = mp.get_context("spawn")  # Colab-safe
    with ctx.Pool(processes=os.cpu_count()) as pool:
        result_iter = pool.imap_unordered(check_key, keys, chunksize=1000)

        for result in result_iter:
            if result[0] == 'match':
                _, key, h160 = result
                print(f"\n**PREFIX MATCH FOUND!** Private key {hex(key)} produces Hash160: {h160}\n")
            elif result[0] == 'progress':
                _, key, h160 = result
                processed += 1
                last_key = key
                last_h160 = h160

    elapsed = time.time() - start_time
    speed = processed / elapsed if elapsed > 0 else 0
    print(f"\nHash160 of the last processed key {hex(last_key)} -> {last_h160}")
    print(f"[✓] Completed Decimal: {decimal} - Processed {processed} keys in {elapsed:.2f}s (Speed: {speed:.2f} keys/sec)")

def main():
    signal.signal(signal.SIGINT, signal_handler)

    print(f"\nLoaded {len(decimal_numbers)} Decimal numbers.")
    print(f"Scanning ±{SCAN_RANGE} around each.\nTarget prefix: {TARGET_PREFIX}")
    print(f"Bloom filter contains {len(KNOWN_H160S)} known H160 hashes.\n")

    for decimal in decimal_numbers:
        process_decimal(decimal)

if __name__ == "__main__":
    main()


Try that


In Colab, the CPU does the heavy lifting by default—Tesla T4 GPU remains mostly idle because bigint cryptography operations are rarely GPU-accelerated.

However, using coincurve combined with multiprocessing, Colab can squeeze maximum performance out of the CPU (usually 2 to 4 cores).
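A quick way to see what the CPU ceiling looks like is a stand-in benchmark: hashing the 32-byte key with SHA256 in place of the full EC + H160 pipeline (so it runs with only the standard library, no coincurve needed). The real per-key cost is higher, so treat the number as an upper bound per core.

```python
import hashlib
import os
import time

def h160_standin(k: int) -> str:
    # Stand-in for the real pipeline (EC pubkey -> SHA256 -> RIPEMD160):
    # hash the 32-byte big-endian private key only, to gauge loop throughput.
    return hashlib.sha256(k.to_bytes(32, "big")).hexdigest()

N = 50_000
t0 = time.perf_counter()
for k in range(1, N + 1):
    h160_standin(k)
single_core = N / (time.perf_counter() - t0)
print(f"~{single_core:,.0f} keys/s single-core; "
      f"multiprocessing scales this by up to {os.cpu_count()}x")
```

On a 2–4 core Colab VM this is why CPU-only scanning tops out in the hundreds of thousands of keys per second, not the hundreds of millions a GPU kernel can reach.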
Post
Topic
Board Bitcoin Discussion
Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
by
Valera909
on 22/05/2025, 23:35:14 UTC
Alright, diving into the digital abyss. 🖤🌀

If we imagine having a “superprocessor” capable of 70 Peta-hashes per second (70 × 10^15 hashes/sec), the question is — how fast can we generate the input data for those hashes?

Factor 1: Input data size
The hashed input is usually a private key (say, 256 bits = 32 bytes).

To feed 70 Peta-hashes per second, you need to supply 70 × 10^15 × 32 bytes per second.

That’s 2.24 × 10^18 bytes/sec, or about 2.2 exabytes per second of input data.

Factor 2: System bandwidth
Modern top-tier server memory (DDR5, HBM) can deliver hundreds of gigabytes per second, maybe up to 1–2 terabytes per second on super-systems.

But 2.2 exabytes per second is orders of magnitude beyond any existing hardware.

Factor 3: Data generation speed
Even if you just have a simple counter generating private keys, updating registers, computing, storing — it takes time and resources.

CPUs or FPGAs can generate tens of billions of keys per second (10^10), but 10^15–10^17? — physically impossible with current architectures.

Summary

Parameter                    Value
Claimed hashing speed        70 Peta-hashes/sec (7×10^16)
Data size per hash           32 bytes
Required data throughput     ~2.24 exabytes/sec (2.24×10^18 bytes)
Realistic memory bandwidth   ~1–2 terabytes/sec
Max data generation speed    ~10^10–10^11 keys/sec

Conclusion:
No existing system can supply input data at a speed anywhere near 70 Peta-hashes per second. This is pure techno-fantasy, ripped from the world of bytes and reality.
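The arithmetic behind that conclusion is easy to check. Assuming an optimistic 2 TB/s of memory bandwidth (roughly top-end HBM), the required input stream overshoots the budget by about a million times:

```python
HASHES_PER_SEC = 70e15       # claimed 70 peta-hashes/sec
BYTES_PER_HASH = 32          # one 256-bit private key per hash
MEM_BW = 2e12                # optimistic 2 TB/s memory bandwidth (assumed)

required_bps = HASHES_PER_SEC * BYTES_PER_HASH   # bytes/sec of input needed
shortfall = required_bps / MEM_BW                # how far over budget we are
print(f"{required_bps:.3g} B/s needed, {shortfall:,.0f}x over memory bandwidth")
```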
Post
Topic
Board Bitcoin Discussion
Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
by
Valera909
on 22/05/2025, 23:25:38 UTC
📍 After running your 13 iterations:
You ended up with only 13 unique delta prefixes, meaning just 13 realistic options for your original N₀.

📉 Reduction:

100000 / 13 ≈ 7692.31

✅ You reduced the brute-force space by around 7,692 times, cutting 99.987% of the junk. That's insanely efficient.



I get that you’re probably reading this standing up, maybe even ready to give a round of applause or a standing ovation — but hold up, folks, no need for that. This is just some good old-fashioned math magic, nothing more.

Honestly, it’s like applauding a toaster for making toast. Sure, it’s impressive, but let’s keep it real — it’s just numbers doing their thing. 😉✨








Post
Topic
Board Bitcoin Discussion
Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
by
Valera909
on 22/05/2025, 10:05:20 UTC
Whoever helps me get this code running on a GPU, I'll definitely give you 1 BTC.
Need to achieve at least 250M keys/sec.

I don’t see anything in this script that @FixedPaul’s VanitySearch fork cannot do.
It’s using GPU and gives 6.8G keys/sec

based on MCD script
For most cases, Prefix has better average performance.

I can only encourage you to read the dedicated post and its conclusions.
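For perspective on what 6.8G keys/s buys against puzzle 71, a back-of-envelope sketch (the interval spans 2^70 keys, from 2^70 to 2^71):

```python
keyspace = 2**70   # puzzle 71 interval size
rate = 6.8e9       # keys/sec from the VanitySearch fork

seconds = keyspace / rate
years = seconds / (365.25 * 24 * 3600)
print(f"~{years:,.0f} years for a full sweep; expected hit in about half that")
```

Which is why pure brute force is hopeless here without narrowing the search space first.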

⚠️ Google Colab is limited
Standard Colab won't give you access to A100 or V100 GPUs by default.

You’ll need Colab Pro+ or your own server with an NVIDIA GPU.

Also, you’ll have to build and run C++ CUDA code, which requires some tricky steps in Colab (mounting Drive, manual compilation, custom execution commands).
Post
Topic
Board Bitcoin Discussion
Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
by
Valera909
on 20/05/2025, 06:29:47 UTC
Does anyone have a public key that corresponds to a 30-character private key?

sure, here you go

Private key (30 characters): 0123456789abcdef0123456789abcd
Public key associated        :  02b17074952d370c7b69ced0290fe82bf92321c9b9d8d7691f6aff716982bf4bc5

   

I'm looking for something like the Bitcoin Puzzle – an address with a balance, but where the private key is in decimal format and has at most 30 digits.
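For scale, "at most 30 decimal digits" implies roughly a 100-bit keyspace, far larger than puzzle 71's 71-bit range:

```python
import math

max_key = 10**30                  # largest 30-digit decimal private key
print(max_key.bit_length())       # bits needed to represent it
print(math.log2(max_key))         # ~99.66 bits of keyspace
```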
Post
Topic
Board Bitcoin Discussion
Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
by
Valera909
on 20/05/2025, 05:35:53 UTC
Does anyone have a public key that corresponds to a 30-character private key?
Post
Topic
Board Bitcoin Discussion
Re: Bitcoin puzzle transaction ~32 BTC prize to who solves it
by
Valera909
on 20/05/2025, 05:28:27 UTC
If you have a public key, even a 30-character one, it might be possible to solve it within a day.