Showing 14 of 14 results by ThewsyRum
Post
Topic
Board Development & Technical Discussion
Re: Is it possible to solve the ECDLP in this situation?
by
ThewsyRum
on 21/02/2025, 22:29:59 UTC
This is impossible. An isogeny between prime-order curves with an F_p-rational kernel would need a kernel subgroup whose size divides the prime group order, so the kernel is either trivial or the whole group, and the latter would collapse the curve into a trivial group. E1 and E2 also have distinct j-invariants, so they are not even isomorphic. It is mathematically impossible
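For reference, non-isomorphism is easy to check numerically: a curve in short Weierstrass form y^2 = x^3 + ax + b over F_p has j-invariant j = 1728 * 4a^3 / (4a^3 + 27b^2) mod p, and curves with distinct j-invariants cannot be isomorphic. A quick sketch in plain Python, using the well-known secp256k1 field prime:

```python
# j-invariant of y^2 = x^3 + ax + b over F_p:
#   j = 1728 * 4a^3 / (4a^3 + 27b^2)  (mod p)
def j_invariant(a: int, b: int, p: int) -> int:
    num = 1728 * 4 * pow(a, 3, p) % p
    den = (4 * pow(a, 3, p) + 27 * pow(b, 2, p)) % p
    return num * pow(den, -1, p) % p  # modular inverse needs Python 3.8+

p = 2**256 - 2**32 - 977          # secp256k1 field prime
assert j_invariant(0, 7, p) == 0  # secp256k1 (a = 0, b = 7) has j = 0
```

If two curves give different values here, no change of variables can map one onto the other.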
Post
Topic
Board Development & Technical Discussion
Re: RSZ short S same used on 2 addresses .. ?
by
ThewsyRum
on 16/01/2025, 15:10:23 UTC
No, bro, you can only recover the private key when the repeated R value comes from two signatures made with the same key (the same address). Why on earth would different addresses share a relationship like that? Smiley
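For context on why a repeated nonce under the same key is fatal: given two ECDSA signatures (r, s1) and (r, s2) over message hashes z1 and z2 that share the nonce k, simple modular algebra recovers the key. A sketch over the secp256k1 group order; the signatures below are simulated algebraically (r is chosen arbitrarily rather than derived from k*G) purely to check the equations:

```python
# secp256k1 group order (well known)
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def recover_from_reused_nonce(r, s1, s2, z1, z2):
    # k = (z1 - z2) / (s1 - s2) mod n, then d = (s1*k - z1) / r mod n
    k = (z1 - z2) * pow(s1 - s2, -1, n) % n
    return (s1 * k - z1) * pow(r, -1, n) % n

# Simulate two signatures that reuse k under the SAME private key d
d, k, r = 0xC0FFEE, 0x1234567, 0xABCDEF   # r faked for the algebra check
z1, z2 = 0x1111, 0x2222
s1 = pow(k, -1, n) * (z1 + r * d) % n
s2 = pow(k, -1, n) * (z2 + r * d) % n
assert recover_from_reused_nonce(r, s1, s2, z1, z2) == d
```

With two different private keys, s1 - s2 no longer isolates k, so the same trick does not apply.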
Post
Topic
Board Development & Technical Discussion
Merits 1 from 1 user
Re: BTC Puzzle Question
by
ThewsyRum
on 14/01/2025, 20:02:25 UTC
⭐ Merited by ABCbits (1)
Post
Topic
Board Development & Technical Discussion
Re: What are the flaws in this method of generating random numbers.
by
ThewsyRum
on 30/12/2024, 16:21:55 UTC
The entropy is low: using only the last two numeric digits yields just 100 possible outcomes. Such a limited range makes it far easier for participants to predict the result, especially at scale
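To put a rough number on it: 100 equally likely outcomes amount to under 7 bits of entropy, versus the 128+ bits you would expect from a proper CSPRNG.

```python
import math

outcomes = 100                      # last two decimal digits: 00..99
entropy_bits = math.log2(outcomes)  # about 6.64 bits
guesses_for_half_chance = outcomes // 2

print(f"{entropy_bits:.2f} bits; ~{guesses_for_half_chance} guesses for a 50% hit")
```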
Post
Topic
Board Development & Technical Discussion
Re: Multibit HD YAML Files with encryptedPassword and encryptedBackupKey
by
ThewsyRum
on 26/12/2024, 18:23:49 UTC
Test password variations; since you mentioned a possible UTF-16 issue, try removing or adding trailing spaces, test versions with and without uppercase letters, and if your password contains special characters, substitute their plain ASCII forms
Perform a brute force; extract the base64 data from "encryptedPassword" and use a script to test password candidates. If a candidate decrypts this block, that is the correct password
You can also brute-force the ".wallet" file directly. Since you have "continuous backups" and ZIP backups, you have multiple snapshots to test: sometimes an older file does not share the corruption of a later version
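A minimal sketch of the candidate loop described above. Note that `try_decrypt` is a hypothetical stand-in: the real KDF and cipher parameters have to come from the MultiBit HD wallet format, so plug in the actual decryption attempt there.

```python
import base64

def variations(pw: str):
    # The variants suggested above: trailing spaces and case tweaks
    seen = set()
    for cand in (pw, pw.strip(), pw + " ", " " + pw,
                 pw.lower(), pw.upper(), pw.capitalize()):
        if cand not in seen:
            seen.add(cand)
            yield cand

def crack(encrypted_password_b64: str, base_passwords, try_decrypt):
    blob = base64.b64decode(encrypted_password_b64)
    for pw in base_passwords:
        for cand in variations(pw):
            if try_decrypt(blob, cand):  # hypothetical decryption attempt
                return cand
    return None
```

Run it over every snapshot you have; a hit on any one of them gives you the password for all.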

That's all I know so far; I hope I can help you
Post
Topic
Board Development & Technical Discussion
Re: Why is Kangaroo slow but produces results faster?
by
ThewsyRum
on 13/12/2024, 19:43:16 UTC
I'll give you a better answer, my dear friend. At first I had the same doubt, and back then I pretended to understand the topic when in reality I couldn't explain the difference. First, you need to know that these are two different things. VanitySearch uses plain brute force: it tests key after key until one matches (that is, it cracks the private key by exhaustion). Kangaroo, on the other hand, is an algorithm for recovering a private key when the corresponding public key is known; it uses mathematical structure to exploit that knowledge instead of testing combination by combination.

Let me create an analogy: Imagine you're in a forest looking for a specific tree, but the size of this forest is very large, enormous, gigantic, and you have two strategies to find this tree: 

-- You could choose to walk randomly through the forest, checking every tree you encounter until you find the specific one. This approach has no specific direction and relies heavily on luck to find the right tree. You could even be the fastest person in the world, but in such a vast forest, this could take years because you're trying all possibilities without an optimized plan (VanitySearch). 

-- Now imagine you have a map of the forest with predefined trails that increase the chances of finding that tree. The "kangaroo" jumps between strategic points along these intelligent trails, significantly reducing the time needed to reach the desired tree. Even though each jump may be slower compared to VanitySearch, this optimized strategy allows you to find the specific tree much faster by taking advantage of the mathematical algorithm created by a super-intelligent guy named John M. Pollard (Pollard's Kangaroo)
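The trails-and-trap idea can be sketched in a few lines. This is a toy Pollard's kangaroo over a small multiplicative group, purely illustrative: the real tools work on secp256k1 points, and the parameters here (jump table, trail lengths) are my own assumptions for a demo.

```python
# Toy Pollard's kangaroo: solve g^x = h (mod p) with x known to lie in [0, b].
def kangaroo(p, g, h, b, k=8):
    jumps = [1 << i for i in range(k)]   # pseudorandom jump sizes 1, 2, 4, ...
    f = lambda y: jumps[y % k]
    # Tame kangaroo: starts at the top of the range and lays a trap at the end.
    xT, yT = 0, pow(g, b, p)
    for _ in range(4 * int(b ** 0.5)):
        s = f(yT)
        xT, yT = xT + s, yT * pow(g, s, p) % p
    # Wild kangaroo: starts at h = g^x and hops along the same trails.
    xW, yW = 0, h
    while xW <= b + xT:
        if yW == yT:                     # landed in the trap
            return b + xT - xW
        s = f(yW)
        xW, yW = xW + s, yW * pow(g, s, p) % p
    return None                          # unlucky run: retry with other trails

def solve(p, g, h, b):
    for k in (8, 9, 10, 11):             # retry with different jump tables
        x = kangaroo(p, g, h, b, k)
        if x is not None and pow(g, x, p) == h:
            return x
    return None
```

Note the total work is on the order of sqrt(b) jumps, which is exactly why Kangaroo beats exhaustive search when the key is known to lie in a bounded range.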
Post
Topic
Board Development & Technical Discussion
Re: "Fixing 24-Word Mnemonic Support in bip39-solver-gpu"
by
ThewsyRum
on 12/12/2024, 19:39:08 UTC

I read what mcdouglasx said and took it as a challenge. Take a good look at what I did: the ipad_key and opad_key arrays are 128 bytes each because that is the SHA-512 block size HMAC operates on. When mnemonic_length exceeds 128 bytes (which can happen with 24 words), the key is not processed correctly. HMAC requires a key-normalization step: if the key is longer than the block size (128 bytes for SHA-512), it must first be hashed down to 64 bytes, then zero-padded up to 128 bytes.
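That normalization rule can be sanity-checked against Python's hmac module, which performs exactly this step internally. A host-side reference sketch of the RFC 2104 behavior (not the kernel itself):

```python
import hashlib, hmac

BLOCK = 128  # SHA-512 block size in bytes

def normalize_key(key: bytes) -> bytes:
    # HMAC rule: hash keys longer than the block size, then zero-pad to BLOCK
    if len(key) > BLOCK:
        key = hashlib.sha512(key).digest()
    return key.ljust(BLOCK, b"\x00")

def hmac_sha512(key: bytes, msg: bytes) -> bytes:
    k = normalize_key(key)
    ipad = bytes(b ^ 0x36 for b in k)
    opad = bytes(b ^ 0x5C for b in k)
    return hashlib.sha512(opad + hashlib.sha512(ipad + msg).digest()).digest()

long_key = b"a" * 200  # longer than one block, as a 24-word mnemonic can be
assert hmac_sha512(long_key, b"mnemonic") == \
       hmac.new(long_key, b"mnemonic", hashlib.sha512).digest()
```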

The changes I made to the file int_to_address.cl look like this:
Code:
__kernel void int_to_address(ulong mnemonic_start_hi, ulong mnemonic_start_lo, __global uchar * target_mnemonic, __global uchar * found_mnemonic) {
    ulong idx = get_global_id(0);

    ulong mnemonic_lo = mnemonic_start_lo + idx;
    ulong mnemonic_hi = mnemonic_start_hi;

    // ... [existing code to construct 'bytes' and 'mnemonic']

    // Constructing the mnemonic
    // Construct the mnemonic (24 words, space-separated)
    uchar mnemonic[180] = {0};
    int mnemonic_index = 0;
    for (int i = 0; i < 24; i++) {
        int word_index = indices[i];
        int word_length = word_lengths[word_index];

        for(int j = 0; j < word_length; j++) {
            mnemonic[mnemonic_index] = words[word_index][j];
            mnemonic_index++;
        }
        mnemonic[mnemonic_index] = 32; // Space
        mnemonic_index++;
    }
    mnemonic[mnemonic_index - 1] = 0; // Replace the trailing space with a null terminator
    uchar mnemonic_length = mnemonic_index - 1; // Byte length, excluding the null

    // Key Normalization
    uchar normalized_key[128] = {0};
    if (mnemonic_length > 128) {
        // If the mnemonic is larger than 128 bytes, hash it with SHA-512
        sha512(&mnemonic, mnemonic_length, &normalized_key);
        // Remaining bytes are already zero-padded (done at initialization)
    } else {
        // If the mnemonic is less than or equal to 128 bytes, copy it directly
        for(int i = 0; i < mnemonic_length; i++) {
            normalized_key[i] = mnemonic[i];
        }
        // Remaining bytes are already zero-padded
    }

    // Initialization of ipad_key and opad_key
    uchar ipad_key[128];
    uchar opad_key[128];
    for(int x = 0; x < 128; x++) {
        ipad_key[x] = 0x36;
        opad_key[x] = 0x5c;
    }

    // Apply XOR with the normalized key
    for(int x = 0; x < 128; x++) {
        ipad_key[x] ^= normalized_key[x];
        opad_key[x] ^= normalized_key[x];
    }

    // Continue seed derivation process
    uchar seed[64] = {0};
    uchar sha512_result[64] = {0};
    uchar key_previous_concat[256] = {0};
    uchar salt[12] = {109, 110, 101, 109, 111, 110, 105, 99, 0, 0, 0, 1};
    for(int x = 0; x < 128; x++) {
        key_previous_concat[x] = ipad_key[x];
    }
    for(int x = 0; x < 12; x++) {
        key_previous_concat[x + 128] = salt[x];
    }

    sha512(&key_previous_concat, 140, &sha512_result);
    copy_pad_previous(&opad_key, &sha512_result, &key_previous_concat);
    sha512(&key_previous_concat, 192, &sha512_result);
    xor_seed_with_round(&seed, &sha512_result);

    for(int x = 1; x < 2048; x++) {
        copy_pad_previous(&ipad_key, &sha512_result, &key_previous_concat);
        sha512(&key_previous_concat, 192, &sha512_result);
        copy_pad_previous(&opad_key, &sha512_result, &key_previous_concat);
        sha512(&key_previous_concat, 192, &sha512_result);
        xor_seed_with_round(&seed, &sha512_result);
    }

    // ... [existing code for key generation and address verification]

    if(found_target == 1) {
        found_mnemonic[0] = 0x01;
        for(int i = 0; i < mnemonic_index; i++) {
            target_mnemonic[i] = mnemonic[i];
        }
    }
}

And in the file just_seed.cl, it turned out like this:
Code:
__kernel void just_seed(ulong mnemonic_start_hi, ulong mnemonic_start_lo, __global uchar * target_mnemonic, __global uchar * found_mnemonic) {
    ulong idx = get_global_id(0);

    ulong seed_start = idx * 64;
    ulong mnemonic_lo = mnemonic_start_lo + idx;
    ulong mnemonic_hi = mnemonic_start_hi;

    // ... [existing code to build 'bytes' and 'mnemonic']

    // Mnemonic construction (24 words, space-separated)
    uchar mnemonic[180] = {0};
    int mnemonic_index = 0;
    for (int i = 0; i < 24; i++) {
        int word_index = indices[i];
        int word_length = word_lengths[word_index];

        for(int j = 0; j < word_length; j++) {
            mnemonic[mnemonic_index] = words[word_index][j];
            mnemonic_index++;
        }
        mnemonic[mnemonic_index] = 32; // Space
        mnemonic_index++;
    }
    mnemonic[mnemonic_index - 1] = 0; // Replace the trailing space with a null terminator

    uchar mnemonic_length = mnemonic_index - 1; // Byte length, excluding the null

    // Key normalization
    uchar normalized_key[128] = {0};
    if (mnemonic_length > 128) {
        // If the mnemonic is larger than 128 bytes, hash with SHA-512
        sha512(&mnemonic, mnemonic_length, &normalized_key);
        // Fill remaining bytes with zeros (already done during initialization)
    } else {
        // If the mnemonic is 128 bytes or less, copy directly
        for(int i = 0; i < mnemonic_length; i++) {
            normalized_key[i] = mnemonic[i];
        }
        // Remaining bytes are already filled with zeros
    }

    // Initialization of ipad_key and opad_key
    uchar ipad_key[128];
    uchar opad_key[128];
    for(int x = 0; x < 128; x++) {
        ipad_key[x] = 0x36;
        opad_key[x] = 0x5c;
    }

    // Apply XOR with normalized key
    for(int x = 0; x < 128; x++) {
        ipad_key[x] ^= normalized_key[x];
        opad_key[x] ^= normalized_key[x];
    }

    // Continue the seed derivation process
    uchar seed[64] = { 0 };
    uchar sha512_result[64] = { 0 };
    uchar key_previous_concat[256] = { 0 };
    uchar salt[12] = { 109, 110, 101, 109, 111, 110, 105, 99, 0, 0, 0, 1 };
    for(int x = 0; x < 128; x++) {
        key_previous_concat[x] = ipad_key[x];
    }
    for(int x = 0; x < 12; x++) {
        key_previous_concat[x + 128] = salt[x];
    }

    sha512(&key_previous_concat, 140, &sha512_result);
    copy_pad_previous(&opad_key, &sha512_result, &key_previous_concat);
    sha512(&key_previous_concat, 192, &sha512_result);
    xor_seed_with_round(&seed, &sha512_result);

    for(int x = 1; x < 2048; x++) {
        copy_pad_previous(&ipad_key, &sha512_result, &key_previous_concat);
        sha512(&key_previous_concat, 192, &sha512_result);
        copy_pad_previous(&opad_key, &sha512_result, &key_previous_concat);
        sha512(&key_previous_concat, 192, &sha512_result);
        xor_seed_with_round(&seed, &sha512_result);
    }
}

That was my work; test it to see if everything runs fine and let me know
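One way to test it, as suggested: compare the kernel's output against a host-side reference. The 2048-round loop in both kernels is PBKDF2-HMAC-SHA512 with salt "mnemonic", which Python reproduces directly. The mnemonic string below is just an illustrative long input (over 128 bytes, to exercise the normalization path), not necessarily a checksum-valid phrase:

```python
import hashlib, hmac

def bip39_seed(mnemonic: bytes, passphrase: bytes = b"") -> bytes:
    # BIP-39: PBKDF2-HMAC-SHA512, salt "mnemonic"+passphrase, 2048 rounds, 64 bytes
    return hashlib.pbkdf2_hmac("sha512", mnemonic, b"mnemonic" + passphrase, 2048, 64)

def seed_like_kernel(mnemonic: bytes) -> bytes:
    # The same structure as the kernel: first round over salt || 0x00000001,
    # then 2047 more rounds, XOR-accumulating into the seed
    u = hmac.new(mnemonic, b"mnemonic\x00\x00\x00\x01", hashlib.sha512).digest()
    seed = u
    for _ in range(2047):
        u = hmac.new(mnemonic, u, hashlib.sha512).digest()
        seed = bytes(a ^ b for a, b in zip(seed, u))
    return seed

long_mnemonic = b"abandon " * 23 + b"art"  # 187 bytes: triggers key normalization
assert seed_like_kernel(long_mnemonic) == bip39_seed(long_mnemonic)
```

If the kernel's 64-byte seed matches `bip39_seed` for the same mnemonic, the normalization fix is working.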


Post
Topic
Board Development & Technical Discussion
Re: MasterChain Soft :: Click and play hosting of nodes from blockheight X
by
ThewsyRum
on 12/12/2024, 17:37:23 UTC
Interesting...
Post
Topic
Board Development & Technical Discussion
Re: How do timelocks come into play for PTLCs?
by
ThewsyRum
on 10/12/2024, 18:05:48 UTC
No problem, happy to help! Good luck with the swap!  Smiley
Post
Topic
Board Development & Technical Discussion
Re: How do timelocks come into play for PTLCs?
by
ThewsyRum
on 09/12/2024, 12:35:30 UTC
Observe these 3 characteristics: 
- UTXOs 
- Fund destinations 
- Time conditions (timeouts) 

Bob's signature cannot be reused for another transaction that changes any of these parameters. If Alice tries to create a second transaction using the same UTXOs, Bob's signature will not be valid for this new transaction

As I mentioned above, PTLCs often combine adaptor signatures with multi-party signatures, so: 
- Alice deposits her funds into a multi-sig address that requires collaboration between Alice and Bob to spend the funds
- Bob holds a signature that is only valid for the specific transaction claiming the funds according to the swap terms (by presenting the secret s)

    Refund Mechanism 
As also mentioned earlier, in addition to swap transactions, there is a refund path that Bob can use in case Alice disappears or does not complete the swap: 
- Bob does not need Alice's tweak t to execute the refund
- After the timeout tB, he can sign the refund transaction using his own key, independently of Alice

Consider the following scenario: 
Bob signs a specific transaction that can only be executed with Alice's secret s, and Alice cannot sign additional transactions spending the same UTXOs without Bob's collaboration
Thus, Alice cannot simply spend the UTXOs in another way without invalidating Bob's signature. If she attempts a double spend, the original transaction with Bob's signature will still be valid, and he will be able to execute the refund after the timeout
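The two key points above, that Bob's pre-signature binds to one exact transaction, and that completing it reveals the secret s, can be sketched with a toy Schnorr adaptor signature. This runs in a small multiplicative group (p = 2q + 1) purely for illustration; real PTLCs use secp256k1, and all the constants here are my own toy choices:

```python
import hashlib

# Toy Schnorr adaptor signature in a small multiplicative group (p = 2q + 1).
p, q, g = 1019, 509, 4            # g generates the prime-order-q subgroup mod p

def H(*vals) -> int:              # toy challenge hash
    return int.from_bytes(hashlib.sha256(repr(vals).encode()).digest(), "big") % q

x = 123; X = pow(g, x, p)         # Bob's signing key pair
sec = 77; S = pow(g, sec, p)      # the swap secret s and its public point
m = "tx spending the multisig UTXO to Alice"

# Bob's pre-signature: bound to this exact message m and secret point S
r = 200; R = pow(g, r, p)
c = H(R, S, X, m)
s_pre = (r + c * x) % q
assert pow(g, s_pre, p) == R * pow(X, c, p) % p            # pre-signature checks

# Alice completes it with the secret, yielding an ordinary valid signature
s_full = (s_pre + sec) % q
assert pow(g, s_full, p) == (R * S % p) * pow(X, c, p) % p # full signature checks

# Publishing s_full reveals the secret to Bob
assert (s_full - s_pre) % q == sec

# Changing the transaction changes c, so the pre-signature no longer verifies
# (guarded against the negligible chance of a toy-hash collision)
c2 = H(R, S, X, "some other transaction spending the same UTXOs")
assert c2 == c or pow(g, s_pre, p) != R * pow(X, c2, p) % p
```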

After this magnificent explanation, do I deserve a donation? Cheesy
Post
Topic
Board Development & Technical Discussion
Re: How do timelocks come into play for PTLCs?
by
ThewsyRum
on 07/12/2024, 20:20:21 UTC
You are ignoring a very important part, which is that PTLCs typically combine adaptor signatures with multi-party signatures (like MultiSig)

"Adaptor signatures alone often cannot fully guarantee a contract... This issue is usually resolved by combining adaptor signatures with multi-party signatures. For example, Alice deposits her funds into an address that can only be spent if she and Bob collaborate to create a valid signature" 

So, in your example, when Alice disappears, Bob does not need Alice's tweak t to execute his refund. After the timeout tB, Bob can use a separate pre-agreed refund transaction path that does not require completing the adaptor signature. This refund mechanism is built into the initial setup of the swap, similar to how HTLCs handle it

This is why documentation notes that adaptor signatures typically require "a time-locked refund option in case one party refuses to sign." The refund path is separate from the adaptor signature path used for successful execution of the swap
Post
Topic
Board Development & Technical Discussion
Re: How do timelocks come into play for PTLCs?
by
ThewsyRum
on 07/12/2024, 11:40:41 UTC
Quote
would we just add the timelock into the transaction message m in each adaptor signature?
Yes, the timelocks are incorporated into the transactions similarly to HTLCs. Specifically, the timelock can be included in the transaction data or in the adaptor signatures themselves to ensure that the time conditions are respected. By integrating the timelock into the transaction message m, you ensure that the time constraints become an integral part of the swap agreement, allowing the involved parties to perform claim or refund actions

Quote
If so, is anything different from how it is in HTLCs, or is all we need still tA > tB?
The condition that tA is greater than tB remains in place to ensure the security of the atomic swap. This relationship prevents scenarios where one party could exploit the system to improperly obtain funds. It is therefore still necessary to maintain the hierarchy of timeouts so that, if the swap fails or is aborted, each party can safely obtain a refund. There are no fundamental differences regarding tA > tB between PTLCs and HTLCs
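Concretely, "including the timelock in the message m" just means the locktime is part of the digest the signature commits to, so changing the timeout invalidates the signature. A toy sketch (not real Bitcoin sighash serialization, which is more involved):

```python
import hashlib

def toy_sighash(tx: dict) -> bytes:
    # Stand-in for a transaction digest: every field, including the locktime,
    # feeds the message m that the (adaptor) signature commits to
    ser = repr(sorted(tx.items())).encode()
    return hashlib.sha256(ser).digest()

m1 = toy_sighash({"utxo": "deadbeef:0", "dest": "refund_to_Bob", "locktime": 850000})
m2 = toy_sighash({"utxo": "deadbeef:0", "dest": "refund_to_Bob", "locktime": 850001})
assert m1 != m2  # a different timeout is a different message: old signatures break
```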
Post
Topic
Board Development & Technical Discussion
Topic OP
The truth about: Twist Attack Sub Group Attack
by
ThewsyRum
on 17/11/2024, 20:37:40 UTC
I'm here again. I noticed a group of people still determined to somehow break Bitcoin wallets using this attack, especially the one implemented by
https://github.com/KrashKrash/Twist-Attack-Sub-Group-Attack

 First, how would this attack work?
 Partial Private Key Collection: 
- Calculate small subgroups of each twist curve and attempt to compute the discrete logarithm within these subgroups to obtain partial private keys. 

  Coprime Verification:
- After collecting all partial private keys, verify if the subgroup orders are pairwise coprime, which is a requirement for applying the CRT (Chinese Remainder Theorem). 

  Combining Partial Keys Using CRT:
- If the subgroup orders are pairwise coprime, calculate the combined modulus by multiplying all subgroup orders. 
- Use CRT to combine the partial private keys modulo their respective subgroup orders to obtain the private key modulo the combined modulus. 
- If the combined modulus is greater than the order n of the secp256k1 curve, the private key modulo n can be recovered. 
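The CRT combination step above is easy to sketch, and it also shows why the size of the combined modulus is the whole game. Toy numbers below, not real twist subgroup orders:

```python
from math import gcd, prod

def crt(residues, moduli):
    # Combine x = r_i (mod m_i) into x mod M, where M is the product of the m_i,
    # assuming the moduli are pairwise coprime
    assert all(gcd(a, b) == 1 for i, a in enumerate(moduli) for b in moduli[i + 1:])
    M = prod(moduli)
    x = sum(r * (M // m) * pow(M // m, -1, m) for r, m in zip(residues, moduli)) % M
    return x, M

key = 123456789                      # the "real" private key in this toy
moduli = [101, 103, 107]             # stand-ins for small subgroup orders
residues = [key % m for m in moduli]

x, M = crt(residues, moduli)
assert x == key % M                  # CRT recovers the key modulo M ...
assert x != key                      # ... but M = 1113121 << key: not the key itself
```

Unless M exceeds the full key range, you only learn the key modulo M, which is exactly the problem with this attack against secp256k1.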

 Alright, but in practice, this doesn't work.
Why? Because the combined modulus is, in most cases (if not all), much smaller than n, making recovery impossible. 

  Second point:
The cofactor of secp256k1 is 1, meaning there are no small-order subgroups on the curve itself that could be exploited. The twist of secp256k1, by contrast, does not have prime order: it contains several small subgroups, which is precisely what this attack targets, but the product of those small subgroup orders is still far smaller than n.

  Third point:
Bitcoin implementations, such as the "libsecp256k1" library, perform checks to ensure that the points used are indeed on the curve. This prevents you from, for example, using points belonging to the twist or off-curve points to try and extract private key information. 

  In summary, these characteristics make the attack unfeasible.


If I made any mistakes in my explanation or spelling (English is not my native language), feel free to correct me.
Post
Topic
Board Development & Technical Discussion
Topic OP
Something different
by
ThewsyRum
on 16/11/2024, 14:33:14 UTC
Hey man, what's up? I finally decided to create an account on this forum Smiley I don't know, sometimes I'm lazy when it comes to simple things.  Grin

Lately, I've been pretty motivated, studying various topics at the same time. Yeah... I realized there are a lot of brilliant minds here, but not everyone has "the gift" of putting things into practice. "Some people excel at planning, while others shine in execution."

My goal would be to contribute by coding a new algorithm to solve the ECDLP, or perhaps modify an existing one, or even, who knows, come up with a new attack or something along those lines. Any thoughts?