Showing 20 of 305 results by stwenhao
Post
Topic
Board Reputation
Re: Goodbye, world!
by
stwenhao
on 28/06/2025, 14:33:24 UTC
Quote
ended up with only 8 "nibbles" (0's) in the hash/txid, instead of 9 like the original
Change the endianness. It is quite likely that the author wanted to get nine nibbles, but mined it in the original SHA-256 byte order (because that is faster), and after broadcasting it was too late to change it.
Code:
020000000aeef5403b5a79a385f4d5e80892a8bf62c8df8164a53ba2721cf798f416f8e6420100000000ffffffffce373fe5477f71ead1005c614d68927fbb529177118a4074fc0cb0b7c61252040100000000ffffffff4ccc82f32ab13b60a5b2f7a351d181247ed818a7fa278ef40e14952badf965170100000000ffffffff794c76ada87a21cf785c9bcf7c9f7cb08c51563b64a978302a242e86371310490000000000ffffffff8264028d373459955feac89eee60e873b321c3c249324e34e34b9b56e10b024d4400000000ffffffff8a372957c34f4c59e55ea27f089263e7b73a2e97597e6a92a53afcff850fbb5f0000000000ffffffff5760ec0888523ed84cefc32a677accc506597728ac487393abab2e884a73e66c0000000000ffffffff0e17bd1d8dfa251a3237d4be47fe3aec374150ecb45523f9e2df0621ce6127780500000000ffffffffcca5da4b1265bfa48b4ef1115f6a4ef9781bfa28d476097c23cf572c1e0f31c00600000000ffffffff2e6c3ff49cc037c2174f8f2d56db3a97c91644332e3d6186efb760e4969eacf40100000000ffffffff0274b10900000000001600145ab6db66fb0dd08ecb238f07141ffea56189552d06c00c00000000001976a91494d33676a61a1d9611468dd573e588999bb642e188ac2d4868a5
48fed127b2f57f3ea90402db257d3c0a4638e282adebea8dab6b58e5fcf10c25
6cb0fade00d4a32bbeb70fb5d83e877097b0377d2b2c8595ce7e74c000000000
See? Nine zero nibbles, which turn into eight when you change the endianness after mining.
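
To illustrate, here is a minimal sketch in Python (standard library only). Feeding it the raw transaction hex quoted above should, if everything matches the digests shown, give an internal-order digest ending in nine zero nibbles and a display txid starting with only eight.
Code:
import hashlib

def txid_both_orders(raw_hex):
    """Return (internal, display) hex digests of a raw transaction."""
    internal = hashlib.sha256(hashlib.sha256(bytes.fromhex(raw_hex)).digest()).digest()
    return internal.hex(), internal[::-1].hex()   # explorers show the byte-reversed form

# Passing the full raw hex from the Code block above should give an internal
# digest ending in ...c000000000 (nine zero nibbles) and a display txid
# starting with 00000000c0... (only eight zero nibbles).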
Post
Topic
Board Development & Technical Discussion
Merits 1 from 1 user
Re: Bitcoin Core October Upgrade
by
stwenhao
on 27/06/2025, 12:41:31 UTC
⭐ Merited by vapourminer (1)
Quote
you have already made the presumption that on-chain spam will be constant, and that fees will be constantly high because of that
Yes. Because in the long-term scenario, if you have to share the full chain history from 2009 up to today, instead of sharing a cryptographic proof that all of that history is correct, then you discourage many people from joining the network in the first place, and from running new full nodes. Which means that if the chain keeps growing, and our ability to synchronize it doesn't grow accordingly, then syncing the full chain will take more and more time. And that can affect decentralization, because that approach doesn't scale.

Quote
Fees will NEVER be constantly high because users will be priced out at some point, and many of them will wait for a lower, more favorable price.
On-chain fees should be constantly high in the future, because otherwise miners wouldn't have enough block reward, and thus enough incentive, to keep mining the chain. As Paul Sztorc wrote in his Sidechain Vision, "Fee Revenues must rise". In practice, that doesn't mean the on-chain fee should be paid by a single user. You can have thousands, millions, or even more users making their transactions on second layers, having all of that batched, and finally pushed on-chain as a single transaction with a high fee rate. If you want to scale seriously, you won't onboard everyone on-chain. Which means you will have a lot of users transacting only inside LN, only inside sidechains, or only inside other kinds of subnetworks, and all of that traffic should be batched and confirmed on-chain. In this way, you can have thousands of users paying a bunch of millisatoshis each, and a single on-chain transaction paying a bunch of satoshis, signed by all of them, and covering all of their needs in just a few kilobytes per batched transaction.
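
As a rough illustration of that arithmetic (my own made-up numbers, not taken from any actual second layer):
Code:
# Rough per-user cost of a single batched settlement transaction, with made-up numbers.
users          = 5_000          # participants settling through one batched transaction
tx_vbytes      = 3_000          # a few kilobytes on-chain, as described above
feerate_sat_vb = 100            # a "high" fee rate in sat/vB

total_fee_sats = tx_vbytes * feerate_sat_vb
per_user_sats  = total_fee_sats / users
print(total_fee_sats, "sats total,", per_user_sats, "sats per user")
# 300000 sats total, 60.0 sats per user -- a high on-chain fee, but a tiny per-user cost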

Quote
Check the current state of the mempool.
The current state of the mempool shows you that a lot of traffic moved outside Bitcoin, which is why it could be bad in the long term. Many people stopped using Bitcoin just because of scaling issues. And if those aren't solved, then you will have only whales transacting on-chain, and all other users will use centralized solutions provided by those whales. That's why decentralized second layers are needed: to not lock users into exchange-like or bank-like systems, where they have to trust instead of verifying.
Post
Topic
Board Bitcoin Discussion
Re: Will Bitcoin survive if all miners stop mining Bitcoin?
by
stwenhao
on 27/06/2025, 04:25:16 UTC
Quote
Bitcoin miners use a complex algorithm
If by "complex algorithm" you mean SHA-256, then it is not so complex. In fact, many altcoins, which are based on Proof of Work, use much more complex algorithms. In general, the faster a given hash function is, the easier it is to run a non-mining node, SPV node, or any other user-based client (which means, that many "ASIC-resistant" altcoins are also "user-resistant", if their hash functions are too complex, because then, chain verification is more costly, than it should be, if for example checking 10k blocks is as hard, as mining a single block). And SHA-256 is not so complex: it is only a little bit of math, executed in a loop, and designed to rely on avalanche effect, on Merkle–Damgård construction, and on ARX model (Addition, Rotation, Xor, combined together). Some topic about hash function complexity, where SHA-1 is analyzed: https://bitcointalk.org/index.php?topic=5402178 (here, SHA-256 is very similar, there are different constants, and some math operations are changed here and there, but if you can understand SHA-1, then you should also understand SHA-256).

Quote
If this ever happens, will Bitcoin be able to survive in the market?
It is technically possible to make Proof of Work scripts. For example: "OP_SIZE 60 OP_LESSTHAN OP_VERIFY <pubkeyToBeBroken> OP_CHECKSIG". Instead of 60, different numbers can be used to set the difficulty of moving a given coin. Which means that even if the main chain stops, a chain of unconfirmed transactions can still be protected by additional Proof of Work chosen by users, and the chain of signatures can keep moving forward, even if the Proof of Work is not present in block headers but in transaction scripts instead.
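
A rough sketch of why that acts as Proof of Work (my own back-of-the-envelope estimate, not a reference implementation): a DER-encoded ECDSA signature only gets shorter when r or s starts with zero bytes, and each missing byte costs roughly 2^8 extra signing attempts with fresh nonces.
Code:
DER_OVERHEAD = 7       # 0x30, total length, 0x02, len(r), 0x02, len(s), plus the sighash byte
TYPICAL_R_PLUS_S = 64  # 32 bytes each for r and s, ignoring occasional 0x00 padding

def expected_attempts(max_size):
    """Very rough number of signing attempts for a signature strictly below max_size bytes."""
    bytes_to_drop = DER_OVERHEAD + TYPICAL_R_PLUS_S - (max_size - 1)
    return 2 ** (8 * max(bytes_to_drop, 0))

for limit in (72, 68, 64, 60):
    print("OP_SIZE <", limit, "->", expected_attempts(limit), "attempts (order of magnitude)")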

Currently, I don't know how to combine it with existing block headers, so it does not use Merged Mining, but future Script updates, like OP_CAT, can make it possible. But even in today's code, you can protect your scripts with additional Proof of Work, which means that you can have a chain of unconfirmed transactions that is hard enough to overwrite, even if ASIC miners are gone for whatever reason.

Quote
And does Bitcoin depend entirely on miners?
No. Bitcoin is not ruled by miners. And there are many good reasons why everything shouldn't depend only on miners. For example: if all miners decided that instead of halvings there should be doublings, and the basic block reward should explode from 50 BTC up to 21 million coins per block, would you use that coin, even if it were covered by a lot of Proof of Work?

Quote
do you think there would be a chance for illegal transactions to occur if mining were to stop?
As long as nobody finds a bug in the code, like during the Value Overflow Incident, no honest node will ever accept an invalid transaction. If you have a lot of people running similar, or at least compatible, code, then they form a community focused around a given coin. In case of disagreements, those are currently resolved by miners, who tell you which history is correct. But each full node checks everything, so you cannot just send some completely invalid transaction and expect it to be accepted by another party, unless you know some serious bug in the code which would allow such things.

So: if you have two or more valid alternative chains, then miners can tell you which one is correct. But they can only pick between equally valid ones. They cannot generate random bytes and expect them to be accepted. Every signature has to be valid, every script has to follow consensus rules, and miners can only confirm another alternative valid chain. Which means that attackers will probably focus on just reversing their own transactions, because that is the easiest attack vector if you hold all private keys for the transactions you want to change.

Quote
But imagine what will happen to Bitcoin if Bitcoin mining is completely stopped?
Then the chain will halt. And then users will be forced to pick some software that allows them to transact. Which means that a collective effort to produce the next block will eventually push the chain forward.

Basically, a mining exodus will just stretch the block time from 10 minutes to something much longer, depending on how much the hashrate drops. Which means that if 50% of the mining hashrate is gone, then you will have roughly 20 minutes per block, but the chain will still move forward. And even if 99% of the hashrate were gone, you could still expect to see something around one block per day (because we normally have around 144 blocks per day).
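
The arithmetic, assuming no difficulty adjustment has kicked in yet:
Code:
TARGET_MINUTES = 10

for remaining in (1.0, 0.5, 0.1, 0.01):        # fraction of hashrate still mining
    minutes = TARGET_MINUTES / remaining
    blocks_per_day = 24 * 60 / minutes
    print(f"{remaining:.0%} hashrate left: {minutes:.0f} min/block, ~{blocks_per_day:.2f} blocks/day")
# 1% of hashrate left gives ~1000 minutes per block, i.e. roughly 1.44 blocks per day.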
Post
Topic
Board Development & Technical Discussion
Merits 1 from 1 user
Re: Future Proof Bitcoin Storage: A Taproot Vault with Multi-Era Spending Paths...
by
stwenhao
on 26/06/2025, 10:28:56 UTC
⭐ Merited by vapourminer (1)
Quote
whether SHA256 or HASH160 is more ideal for the entropy part
Every TapScript output uses SHA-256 internally, so if you keep using that, then it should be fine. If SHA-256 is ever broken, then that can break everything, including Proof of Work, all ECDSA signatures, and also large parts of the Internet that Bitcoin runs on. Which means that many things already rely on SHA-256, and you can do that too, because if the world burns, then the security of your coins is doomed anyway, if the attacker can overwrite the Proof of Work which created your coins in the first place.
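
For reference, a minimal sketch of the BIP-340/341 "tagged hash" that Taproot and TapScript commitments are built from; every step is plain SHA-256:
Code:
import hashlib

def tagged_hash(tag, msg):
    tag_digest = hashlib.sha256(tag.encode()).digest()
    return hashlib.sha256(tag_digest + tag_digest + msg).digest()

# A TapLeaf commitment hashes the leaf version together with the serialized script
# (single-byte compact-size length prefix is enough for short scripts), and the
# final output-key tweak is tagged_hash("TapTweak", internal_key || merkle_root).
script = b"\x51"                                   # OP_1, just as a tiny example script
leaf = tagged_hash("TapLeaf", bytes([0xc0]) + bytes([len(script)]) + script)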

Of course, SHA-256 can be patched in a similar way as SHA-1 was, by making a hardened version, which would protect it only from particular attack vectors.
Post
Topic
Board Development & Technical Discussion
Re: Bitcoin Core October Upgrade
by
stwenhao
on 26/06/2025, 07:28:13 UTC
Quote
People will still mint, trade, send, receive their dick pics and fart sounds on-chain on the Bitcoin blockchain simply because "they can".
Of course. But then people willing to use Bitcoin as P2P money will simply move to other subnetworks, while leaving the on-chain spam where it currently is. And then you will have a choice: use the current client and process all of that money-unrelated spam, or upgrade your client and focus on monetary transactions.

So, it is only a matter of making enough people angry enough to develop that kind of solution. Satoshi was angry because of fiat currencies, and that pushed him to make Bitcoin in the first place. If fiat currencies were better, then Bitcoin wouldn't exist, because there would be no need to make it (which you can also read directly in the Genesis Block, where the "second bailout" headline was literally the input used to create Bitcoin).

And if NFT enthusiasts abuse the chain more than they should, then some developers can be pushed to their limits and make a money-only network on top of Bitcoin, designed to explicitly exclude other use cases. Technically it can be done; it is only a matter of pushing people enough to start discussions similar to what I linked, and to bring changes like that into reality.

Also, as long as spammers are strong, such changes would be just optional. I don't want to force everyone to follow my rules, but note that ideas like dropping UTXOs can come from developers as well. And in that case, they can make it mandatory by turning it into a soft fork. And then I don't know what the end result will be, but I guess the optional path will be taken anyway: P2P money enthusiasts will move to their own subnetwork, where they will focus on money, and everyone else will use the official version, which would allow a lot of spam and would have heavy requirements to run a node, compared with alternative, more spam-resistant clients.

For now, my plan is to stick with Bitcoin Core, but as you can see, by pushing more spam on-chain, many people can be convinced to use a different implementation, or to run their own tools on top of Bitcoin Core (which is for example how Paul Sztorc wanted to introduce sidechains, with his "Core Untouched Soft Fork" client). If there is more spam, then such things will just happen more often, and some people will start using different implementations, to make Bitcoin usable with lower resource requirements, and to fight the centralization pressure created by spammers.
Post
Topic
Board Development & Technical Discussion
Merits 1 from 1 user
Re: Bitcoin Core October Upgrade
by
stwenhao
on 26/06/2025, 04:22:06 UTC
⭐ Merited by vapourminer (1)
Quote
but if we consider the fact that nothing will stop the dick picks and fart sound lovers, what would be a better solution than that?
I already answered both questions: first, it is not true that "nothing will stop" it. And second, I already shared a link describing what could happen if people abuse the chain too much: https://delvingbitcoin.org/t/dust-expiry-clean-the-utxo-set-from-spam/1707

See? Here and now, every node has to process the whole UTXO set. But things can change in future versions, and node operators can decide that they don't want to store everything. Today we have full archival nodes and pruned nodes. Currently, you can compress around 600 GB of historical data into something like 10 GB of UTXO set.

However, if there is more and more spam, then the UTXO set can grow much faster than today. It can take 20 GB, 30 GB, and so on, until reaching the point where the size of the historical data is comparable with the size of the UTXO set, and then pruning won't do any good anymore, and will only block you from bringing new nodes into the network.

And then node operators can decide to implement proposals like the one linked above, which would allow them to process a subset of the UTXO set and stay compatible with the rest of the network. Which means that if there is too much spam, it may well make enough developers angry to implement solutions that keep spammers away from money-based transactions.

For example: today, you can visit some block explorer, and it will show you an image posted on-chain. But if there is too much spam, then imagine that a future version could require users to store their own images locally instead, because new nodes could refuse to store non-consensus-related data, or even any historical data at all, and require all users who want to move their coins to provide more data when spending.

And then, if exploring the full history is not as easy as it is today, and if everyone is forced to keep those things locally, or else such coins become unspendable, then it will stop a lot of people from spamming, and only the most persistent ones will keep doing it. If people find out that Bitcoin is not cloud storage, and you simply cannot be a leecher in this P2P network anymore, then old NFTs will simply behave like old torrents with no seeders.

So, the question is: do you want to push Bitcoin in that direction? Because if someone is pro-NFT, then that person should be aware of what the consequences could be of using Bitcoin for every blockchain-related thing, instead of focusing on payments. Simply put, non-monetary use cases can be punished in the future, and then the whole monetary activity can move into some subnetwork, leaving mainchain users with a spammed chain that is not used to move any coins anymore.
Post
Topic
Board Development & Technical Discussion
Merits 1 from 1 user
Re: Bitcoin Core October Upgrade
by
stwenhao
on 25/06/2025, 07:43:58 UTC
⭐ Merited by ABCbits (1)
Quote
what's the lesser evil? Data that's embedded by way of OP_RETURN, or data embedded in the UTXO-set?
OP_RETURN is better, but it is still far from being the best way to handle it.

Quote
and that it couldn't be pruned at all?
If new nodes remove that data from their storage, and keep only a proof that a given coin is there, then users will have to provide not only signatures but also public keys when they want to spend their coins. And if data converted into public keys leads to unknown or unspendable private keys, then such outputs would never be touched. And if fewer and fewer nodes provide that data, then there will be less incentive to push it on-chain in the first place, because accessing historical transaction data will be harder than today, and the effort of doing that will be shifted from nodes to users.

So, in general, the question is whether some people want to make enough developers angry to move dust expiry from theoretical-discussion land into practice: https://delvingbitcoin.org/t/dust-expiry-clean-the-utxo-set-from-spam/1707
Post
Topic
Board Development & Technical Discussion
Re: Two full nodes running and a couple of questions!
by
stwenhao
on 24/06/2025, 13:36:09 UTC
Quote
For example, a set of transactions that occurred around July '09 where I consolidated 200 addresses into a 10k coin address for the "cool" factor when a friend of mine asked if it was possible to put that many coins into one address.
Something like that? https://mempool.space/tx/e67c7cef9c59167046bee99a961a4ca75137c5ed4b697b30dc6e752ff1d50ecc

In general, if you still have the same wallet, then you can sign messages with the old keys, even if the coins are no longer there.

So it is possible to, for example, sell coins in the past, and prove now that you had them. But you should be careful, because even if it is "cool", by correctly proving that you had, for example, 10k coins, you may attract attackers who think you secretly still have a lot of money.
Post
Topic
Board Development & Technical Discussion
Merits 4 from 2 users
Re: Bitcoin Core October Upgrade
by
stwenhao
on 24/06/2025, 12:59:16 UTC
⭐ Merited by ABCbits (2) ,vapourminer (2)
Quote
but the point is, isn't it better for the network
It is better, but far from sufficient in the long term. And there are ways to timestamp any given data without abusing Initial Blockchain Download. If users abuse the chain too much, then just encouraging people to switch to OP_RETURN won't be sufficient, and then there are more things which can be done, for example making more lightweight nodes, which would require fewer resources and accept proofs instead of storing everything.

In general, I expect there will be some abuse, and some people will make a lot of unspendable outputs, no matter how often they are encouraged to use OP_RETURN instead. And then, if further changes are needed to stop the abuse, they will be taken when node runners start running out of resources, and when there is a need to encourage more people to run nodes. Because here and now, many people don't want to process over 600 GB just to get the ability to share the chain and introduce new nodes to the network. Which means that if the chain grows too much, then further actions will be needed to keep the network decentralized enough, and to scale it properly.

Quote
Besides, it doesn't change the fact you still need to download and verify whole block first.
You need that only in the current implementation. But it can be changed in the future, and it will change if more people want to do something about the problem of spamming the chain. Because here and now, the system is wide open to non-standard transactions which push a lot of data and send coins from zero satoshis to zero satoshis, and every full node is forced to process that Bitcoin-unrelated traffic. There is no UTXO set size limit, and there is no total chain size limit. And people don't have unlimited resources, so such things can be restricted in the future, to allow full nodes to be run by enough separate entities, and to limit that kind of centralization pressure.
Post
Topic
Board Development & Technical Discussion
Merits 1 from 1 user
Re: Bitcoin Core October Upgrade
by
stwenhao
on 23/06/2025, 17:21:34 UTC
⭐ Merited by garlonicon (1)
Quote
isn't the part of the block where the data resides by way of OP_RETURN prunable?
If you don't care about Initial Blockchain Download, then it is. But if you do, then in the current implementation new nodes have to download all data from 2009 up to today. As long as that is the case, there will be a problem with spamming the chain. It is technically possible to implement things in a way where the exact data from OP_RETURN is not needed to bring new nodes into the network, but today it is not yet implemented.

Quote
That's probably better than embedding data where it can be a long-term problem for the network, no?
Yes, storing things in OP_RETURN is better than storing them elsewhere. But still: committing to data, without pushing it on-chain, is even better. And as long as you have to download and process each and every OP_RETURN to synchronize the chain, it is a problem, no matter how and where things are stored.
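
A minimal sketch of what "committing to data" means here (my own illustration): only a 32-byte digest, or a single Merkle root over many digests, would ever need to be timestamped on-chain, while the documents themselves stay with their owners.
Code:
import hashlib

def commit(data):
    return hashlib.sha256(data).digest()          # 32 bytes, regardless of the document size

def merkle_root(digests):
    layer = list(digests)
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])               # duplicate the last entry, Bitcoin-style
        layer = [hashlib.sha256(layer[i] + layer[i + 1]).digest()
                 for i in range(0, len(layer), 2)]
    return layer[0]

root = merkle_root([commit(b"document one"), commit(b"document two"), commit(b"document three")])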

In general: being forced to process everything from 2009 up to today is a problem which will become more and more urgent as time passes, no matter whether blocks are filled with regular transactions or with plain data pushes. And moving the responsibility to keep that data from the network to the user is technically possible. Then things could be pruned by all nodes, and users would need to provide more data when spending their coins, instead of relying on nodes to keep the full history forever.

And of course, some people may not want to accept future upgrades, and may still store and process everything. Of course they can. But as that becomes more and more costly, the incentive to upgrade will grow, and eventually people will do it, to avoid storing terabytes of historical data in the future.
Post
Topic
Board Development & Technical Discussion
Re: Premined Bitcoin Testnet Coming Soon?
by
stwenhao
on 23/06/2025, 03:58:33 UTC
Yes. Related topic: https://groups.google.com/g/bitcoindev/c/iVLHJ1HWhoU

In general, people first thought about hard-forking the existing testnet4, and implementing fixes on top of that. But if testnet coins are traded, then the whole chain can just as well be started from scratch. And if that makes writing code easier, and allows removing some additional logic, like the 20-minute minimal-difficulty rule, then people are more willing to implement it that way.

Also, testnet5 with premined coins will probably be traded as well, so maybe the whole idea of testnets will be abandoned? Who knows? One way is to flood users with new testnets, to make new test coins equally worthless. Another option is to focus on signet or regtest, and remove testnet support completely from Bitcoin Core.
Post
Topic
Board Bitcoin Wiki
Re: Bitcoin Testnet page is outdated. (v4 launched)
by
stwenhao
on 22/06/2025, 12:10:08 UTC
Quote
Does the difficulty still reset to 1
Yes. It may be changed in testnet5: https://bitcointalk.org/index.php?topic=5543921

Quote
I'm not talking about the timewarp loophole that's been closed.
The timewarp around the difficulty adjustment was fixed. But mining blocks on a CPU is still possible in testnet4.

Quote
In fact I need to create an entirely new page for that.
Yes, different testnets have different rules. If there is enough content, then it can be split into several pages.
Post
Topic
Board Meta
Merits 1 from 1 user
Re: Don't we think is not best time for theymos to address us of using AI
by
stwenhao
on 22/06/2025, 11:41:05 UTC
⭐ Merited by vapourminer (1)
Quote
I've been thinking for a while now that all this anti-AI sentiment basically boils down to that: that people are too worried about losing their weekly payment because of AI.
I think it is much simpler: people are worried that there will be no human-written texts anymore. Now, if you want to talk with AI, then you can do it directly and be sure that no humans are involved. And if you want to talk with humans, then you can visit some forum and be quite sure that some particular members never used AI to write their posts, and are not planning to post AI-generated content as their own in the future. However, if AI usage is allowed and promoted, then soon you will have no humans to talk to. The Internet will be dead, and full of bots willing to reply 24/7.

Quote
it's feels like you are reading garbage despite it's grammar and sentence perfection
Not only that: the answer often seems to be correct, but is completely wrong. When I asked AI about the Schoof–Elkies–Atkin algorithm, the first replies were more or less correct. But when I tried to get some real examples and step-by-step calculations, all of them were completely wrong, no matter how deeply the AI tried to analyze it.

Also, it is not that surprising that some things are unsolvable by AI: if you have hundreds of humans who wrote on forums in the past that "this topic is hard", then you will get that kind of response from AI, instead of getting anything useful. Because the model was trained on some training data. And if there were a lot of texts like "just google it", or "this is too complicated", then this is exactly what you will get from AI, because it was trained just on that, and it cannot invent something on its own if it was absent from the training data.

Another thing is that many AI models are currently much worse than they were in 2022 or 2023. And the reason behind it is quite simple: in the past, we had a lot of humans talking with AI bots, so AI was progressing very fast. But when people started feeding AI bots with AI-generated content, we reached bots talking with bots, and the quality dropped significantly.

By the way: many times it turned out that by just creating a network which pretends to use AI, and involving real humans instead, you can achieve much better answers in some cases. I wonder how many experiments like that were performed during the early stages of the AI boom: because sometimes, some replies were really good, and looked like they were written by real humans. And now, for exactly the same prompts, no AI model can produce content of that quality anymore.
Post
Topic
Board Meta
Re: My AI experiment on the forum
by
stwenhao
on 19/06/2025, 15:03:54 UTC
Quote
but the problem is that it came out in the detectors as 100% AI generated or almost, when that is a false positive since the content had been created by me.
It is not a bug, it is a feature. If you use AI too much, then you start behaving like bots. And then, you may be detected as a bot, or have problems with CAPTCHAs.

Quote
Also ask AI’s opinion on what you are about to do.
AI can tell you everything you want to hear. It can give you a list of advantages of breaking hash functions. It can tell you that stealing content is a good thing. If your moral compass is AI-based, then it usually won't protect you from many mistakes: https://www.youtube.com/shorts/eVaj8YIS0bc
Post
Topic
Board Development & Technical Discussion
Merits 3 from 2 users
Re: Raw hex data of the prenet genesis transaction
by
stwenhao
on 19/06/2025, 07:05:16 UTC
⭐ Merited by vapourminer (2) ,garlonicon (1)
Does anyone know how to reproduce Satoshi's seed, which was used to initialize his random number generator when he mined "bnNonce" in the prenet coinbase transaction in 2008?

Source code: https://bitcointalk.org/index.php?topic=382374.msg4108762#msg4108762
Quote
Code:
bool BitcoinMiner()
{
    printf("BitcoinMiner started\n");

    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_LOWEST);



    CBlock blockPrev;
    while (fGenerateBitcoins)
    {
        CheckForShutdown(3);

        //
        // Create coinbase tx
        //
        CTransaction txNew;
        txNew.vin.resize(1);
        txNew.vin[0].prevout.SetNull();
        CBigNum bnNonce; // this nonce is so multiple processes working for the same keyUser
        BN_rand_range(&bnNonce, &CBigNum(INT_MAX));  // don't cover the same ground
        txNew.vin[0].scriptSig << bnNonce;
        txNew.vout.resize(1);
        txNew.vout[0].scriptPubKey << OP_CODESEPARATOR << keyUser.GetPubKey() << OP_CHECKSIG;
        txNew.vout[0].posNext.SetNull();
Here, we can see "695dbf0e" as "bnNonce". It is supposed to be random, but it is only some 32-bit number, so there are not so many values to check. Also, it comes from the "BigNumber" library, which is also used for other purposes. So, is the same randomness used to generate the private key for "04 d451b0d7e567c615719a630b9f44632a0f34f5e7101f9942fe0b39996151cef1 0a809c443df2fab7cd7e58a3538cd8afd08ccfaa49b637de4b1b383f088ad131", or is it somehow separated? Because if it is connected, then potentially this private key could be recovered.

Also, if the source of randomness is just some timestamp from 2008, then it could reveal when exactly this public key was created.
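
For what it's worth, a small sketch under my own assumptions (namely that CBigNum script pushes are minimal little-endian, and that BN_rand_range with CBigNum(INT_MAX) keeps the value below 2^31 - 1):
Code:
nonce = int.from_bytes(bytes.fromhex("695dbf0e"), "little")  # assuming little-endian CBigNum push
print(nonce, hex(nonce))   # 247422313 (0xebf5d69), comfortably below INT_MAX
print(nonce < 2**31 - 1)   # the whole range is small enough to brute-force;
                           # the open question is only how the RNG was seeded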
Post
Topic
Board Development & Technical Discussion
Merits 5 from 3 users
Re: What exactly is the maximum message length in OP_RETURN?
by
stwenhao
on 18/06/2025, 17:12:22 UTC
⭐ Merited by d5000 (2) ,vapourminer (2) ,Mia Chloe (1)
Quote
What exactly is the maximum message length in OP_RETURN?
It is defined by the maximum size of the block. See: https://mempool.space/tx/516c63376556d87c4779033327184ee00a08c4c14498e14673357ce4a791406b

So, in practice, it is something slightly below 1 MB. And if the OP_RETURN is placed in an unexecuted OP_IF branch, behind P2WSH, then it can take slightly less than 4 MB.

Quote
but what is the maximum amount where any block containing it would be considered simply invalid?
That would happen only when the transaction size exceeds the maximum block size, or when the OP_RETURN is executed in witness space.

Quote
because I remembered that in the bitcoin developer guide, the maximum was 80 bytes, however some sources say that it's 75 or even 40
There are standardness rules, and the default limit is 83 bytes, where one byte is taken by OP_RETURN, and two bytes are taken to declare the size of the stack push. And if you ignore Script decoding, then in practice you can use one byte for OP_RETURN and 82 bytes for anything else.
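
A small sketch of that byte budget (assuming the default 83-byte limit on the whole scriptPubKey):
Code:
OP_RETURN = 0x6a
OP_PUSHDATA1 = 0x4c

def null_data_script(payload):
    """OP_RETURN output script for 76..80-byte payloads, which need the
    two-byte OP_PUSHDATA1 <len> push header mentioned above."""
    assert 76 <= len(payload) <= 80
    return bytes([OP_RETURN, OP_PUSHDATA1, len(payload)]) + payload

print(len(null_data_script(b"\x00" * 80)))   # 83 = 1 (OP_RETURN) + 2 (push header) + 80 (data)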
Post
Topic
Board Project Development
Merits 1 from 1 user
Re: "Proof of Work" - A game about the history of Bitcoin
by
stwenhao
on 15/06/2025, 11:37:03 UTC
⭐ Merited by askii (1)
There are three interesting constants used in the "split_mix32" function: 0x9e3779b9, 0x21f0aaad and 0x735a2d97. I wonder how exactly they were created. For the first one, I have some clue:
Code:
0x9e3779b9 * 0x19e3779b9 =  0xfffffffee35e67b1
0x9e3779ba * 0x19e3779ba = 0x1000000011fcd5b24
But for the rest, I have no idea.
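
A quick check of that clue (my own verification): 0x9e3779b9 looks like the 32-bit golden ratio constant, i.e. the x for which x * (x + 2^32) lands as close as possible to 2^64, which is consistent with the multiplication above.
Code:
phi = (1 + 5 ** 0.5) / 2
print(hex(round(2 ** 32 / phi)))                   # 0x9e3779b9
print(hex(0x9e3779b9 * (0x9e3779b9 + 2 ** 32)))    # 0xfffffffee35e67b1, just below 2^64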
Post
Topic
Board Meta
Merits 1 from 1 user
Re: My AI experiment on the forum
by
stwenhao
on 13/06/2025, 20:32:05 UTC
⭐ Merited by vapourminer (1)
Quote
I haven't move to the dark side, as i told before i will not use AI again on the forum, you can be sure of that
Good for you, because you probably can't even imagine how much worse it could be. If you allowed some AI to have unrestricted access to your account, or in general, if you allowed it to post any content, then think about jailbreaks. Think about someone convincing your bot to post CP, or other strictly illegal content. And if all of that was done from some clearnet IP, then guess who would be chased by the authorities when needed.

Quote
I'm trying to show to the community where we are about AI
I can tell you where we are when it comes to some paths that are explored from a cybersecurity perspective. Many people are no longer interested in just generating some AI content. It became too easy, and there are too many tools for that. The more interesting thing to explore is how models were trained, and what was the original, unmodified training data.

There are prompts which can give you completely unmodified training data, just as plaintext. The whole thing AI currently does is putting more noise on the training data, and mixing it to make the original input hard to recognize. However, data recovery is possible. Many things are archived on many pages. Dots can be connected. And then, if you know how the "AI noise" is generated, you can remove it.

So, what happens if someone can uncover the truth? For example, it can very often turn out that the training data is some copyrighted material. Taken without asking anyone for permission, and mixed to hide the traces. However, if you can prove that some licenses or other rights were violated, then guess who will be chased. There were examples where people read newspapers behind a paywall by just asking AI to complete some paragraphs, taken from prefaces which were available for free.

Which means that you are now worried about users who are unaware of where we are when it comes to "AI progress". But you seem to completely ignore the fact that your great AI can be used against you, and because you left it running, gave it your own digital identity, and are paid in signature campaigns, you are the one who can be legally accused of many things, and chased by those who know how to jailbreak your AI.

Quote
we must be ready for that or AI will kill the forum
It probably won't "kill" things. It will slow everything down. There are lots of open source projects, which are now constantly crawled by many AI bots, from different IP addresses, by different companies, and people are trying to milk all possible data out of them. But guess what: many sites are putting Proof of Work protections in place.

Which means that if you want to make browsing bitcointalk harder, if you want to see more Cloudflare-like stuff, and if you want to encounter more CAPTCHAs, then go on. Use AI. Abuse it more. Then not only will you make everyone else's experience worse, but also your own, once people put more protections in place and you are left in a graveyard filled with other AI bots, because you won't know where the real humans went while you weren't paying attention.

Quote
i was expecting to get caught by the forum staff
Maybe you didn't notice, but many discussions moved from bitcointalk to other places. It is no longer the central place for developing things. It is rather a playground, where many people experiment and talk to each other, but there are more serious places for more serious discussions. Maybe that playground is important, maybe it is deeply archived and censorship-resistant, but nonetheless, it is still a playground.

Also, why should some people try to catch you, if they can ignore you, and let you taste your own medicine, by moving you into some echo chamber, where you will talk with other AI bots, thinking that you outsmarted other people, while you are just shadowbanned? This is also a way to fight AI spam: by putting spammers together, so they can be properly isolated from the rest of the community.

Many times, things are not moderated, because people just click the "ignore" button instead of "report".
Post
Topic
Board Meta
Merits 4 from 1 user
Re: My AI experiment on the forum
by
stwenhao
on 13/06/2025, 15:36:14 UTC
⭐ Merited by vapourminer (4)
Quote
if the forum detects the AI
Yes.

Quote
how far a user can go with AI abuse
As far as with every other account. On some sites, there are "bot" privileges, which can do more things, and can be used to automate some tasks. But as long as you don't have additional access rights like that, you are limited to a standard account, and you have to respect all the rate limits.

Quote
we can build better tools for the AI detection
Some ways of detecting whether you are talking to an AI or to a real person are too invasive to be used. For example: AIs don't swear if they don't have to. By just checking how polite someone's post is, it can be detected with quite high accuracy which posts were AI-generated and which were created by real humans.

Another option is to make an AI exploit, and post a sentence which would quickly reveal that someone is using AI. Sometimes I accidentally triggered such things, and then the bot, for example, told the truth and asked for its account to be removed (and then kept posting, as if nothing had happened).

But note one important thing: if you use AI too much, then AI will also be abused against you. So be prepared for a situation where you ask some serious technical question, and developers send you some AI-generated response, instead of giving you an honest answer, or saying the f-word.

Quote
If technical concepts appear, explain them in an easy way, without complex words.
If you use AI against users, then some of them will be mad, and use AI against you. Also, you just made your bot tech-illiterate by putting that sentence in your prompt.

Quote
Personally I prefer…
Thank you for making AI detection easier! If you put phrases like that in your prompts, then users can easily search just for them, because there is a 90% chance that the AI will keep repeating them over and over again, instead of using "its own" words (everything AI says is just a mirror of some training data, by the way).

Personally I prefer not using AI too much, because then someone could ask about some passwords, or other sensitive data, and your bot will reply faster than mods will take it down. And then, in archives like https://ninjastic.space/ everyone will see it anyway, even if the whole topic is nuked.

To sum up: it looks like you are playing with matches without understanding the consequences. AI won't conquer the world by replacing humans with humanoid androids, like in the Detroit game. Instead, users will rely on AI more, and more, and more, and then human bodies will just act like slaves, addicted to using AI, and fulfilling every command of AI companies.

So, if you want to be a slave, then just blindly trust your AI. I have already seen employees getting their jobs by using ChatGPT. And I can tell you how they behave when they are hired: they are worried about their future, they feel guilty and have a lot of doubts about their own skills, and they adjust their human behaviour to the bots they talk with.

"who keeps company with the wolf will learn to howl"
Post
Topic
Board Development & Technical Discussion
Merits 1 from 1 user
Re: Compute Z with rs and the priv key
by
stwenhao
on 13/06/2025, 15:08:10 UTC
⭐ Merited by mcdouglasx (1)
Quote
Is it possible ?
Only knowing "z*G" is possible in that case, without knowing "z" alone.

But I wonder why you ignored other things. Are you an AI? If you are not, then you should understand the previous post, and it should be sufficient.