Hello forum users,
I am submitting my application to become a merit source.
Why?
The reason is that 90% of the time, I'm out of sendable merits. I usually award my sendable merits as soon as they are generated (from the merit I receive), and I don't stack them. Therefore, I often can't send merit when I want to.
Application process:
According to the following post, to be a merit source you should:
1. Be a somewhat established member.
2. Collect TEN posts written in the last couple of months by other people that have not received nearly enough merit for how good they are, and post quotes for them all in a new Meta thread. The point of this is to demonstrate your ability to give out merit usefully.
3. We will take a look at your history and maybe make you a source.
For (1), I'm not sure what an established member is; it seems subjective, but I think I've made significant progress in this area.
For (2), here's a list of constructive posts that haven't received a lot of merit, or that have received a little but that I'd still like to be able to merit myself:
Links and Summary:
Quotes from the Posts:
Knots does not stop the spam from entering the blockchain, but it prevents it from being stored on your node which can remove you from liability in cases like what I mentioned.
This only works if the spammer is kind enough to use OP_RETURN or the Taproot/Ordinals method.
If he uses a collection of fake pubkeys (Stampchain ...), then you cannot prevent the data from being stored on your node. Not only will it be stored in the blockchain data (which you can prune later), but also in the UTXO set, which you need to validate transactions.
That's why I consider this "incentive problem" so critical, and judging from the Bitcoindev discussion it was one of the main reasons triggering the decision, alongside the problem that standardness "violations" are currently happily added by miners, so there is a danger that there's no "common mempool policy" at all.
There was a proposal to implement a check that pubkeys must be "real" (i.e. be on the elliptic curve), which means additional resources needed for tx validation, but it could be worth it if it really were a solution to the problem.
This would, however, only make the spam more expensive for the spammer, as he can grind through "real pubkeys which contain his spam data", so if he still wanted to do damage, he could. Even Luke-Jr seems to have admitted (he didn't reply further in that discussion thread) that the additional validation effort wasn't worth it.
My hopes in this field lie basically in new methods to store the blockchain, where you could store a set of proofs without storing the complete data. I haven't heard whether there has been progress on this, though.
Anyway, in this case incentives are useless. The consideration is for malicious entities; they don't care about your incentives.
This is of course true. Neither of these methods/filters could do anything against a really malicious party, as they would use the most harmful method no matter what, and would thus resort to Stampchain and friends even with the OP_RETURN door wide open. But the not completely malicious, profit-driven folks could behave according to these incentives, see below.
The Taproot thing could have been solved, but alas we tend to open more doors than we tend to close for some reason.
I dispute the first part of your sentence. Yes, filters could have worked for some time, but it would have led to a cat-and-mouse game and much higher development/maintenance costs for Core.
And on "open more doors": the fake-pubkey door is the widest one open, and the most harmful. Yes, it is a bit more expensive than Taproot, but Taproot also has some disadvantages. The idea is thus, at least, to nudge the "de facto malicious" spam into OP_RETURN (i.e. spam not intended to attack Bitcoin, but to make profit with NFTs and such), before they get the idea to use the Stampchain method and we get much higher node operation costs.
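For reference, the relay knob being argued about is a node policy setting; at the time of these posts the defaults in Core (Knots exposes stricter variants of the same filters) looked roughly like this:
datacarrier=1
datacarriersize=83
i.e. OP_RETURN outputs are relayed, but only up to roughly 80 bytes of payload.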
I have already run `bitcoind -reindex-chainstate` about 3 times this week. The node then runs fine, until this happens again.
Since that command can consistently fix it, it's your "UTXO set" that keeps getting corrupted (it should be in the log).
Could be during the time when the cached chainstate is being written to disk.
Or even after stopping Core when the SSD is moving data from its/system cache to the actual drive.
For the former, that is usually caused by setting dbcache too high together with a high dbbatchsize, but it's mostly an issue if the drive is suddenly unplugged.
For the latter, it could happen if you unmount the drive right after stopping Core, but AFAIK the unmount will not complete until the flush is finished.
This shouldn't be an issue if you always pay attention to the message when you unmount; sometimes it doesn't actually unmount, yet the external SSD gets unplugged anyway.
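If you want to keep each chainstate flush small while this is being investigated, a minimal sketch (450 is simply Bitcoin Core's default dbcache in MiB, not a tuned recommendation) would be to set in bitcoin.conf:
dbcache=450
and to always stop the node cleanly (bitcoin-cli stop, or closing the GUI normally) and wait until it has fully exited before unmounting the external SSD.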
...Unless it's a hardware issue.
Of course, it's better to ask hardware-related questions, like how your SSD works, on computer hardware forums.
Why is the data corrupted so often? Is it normal?
Obviously not normal; if it were, every node would get its chainstate corrupted as often as yours.
Start by checking the drive for "bad sectors", because that's the easiest thing to test. Samsung should have an official SSD tool to do that.
CPU and memory are more troublesome to test; they can appear to work fine until they're stressed hard enough.
Those are more of a general computer issue than a Bitcoin one though, so search the internet for related tutorials.
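On Linux, one way to read the drive's self-reported health (assuming the SSD shows up as /dev/sda; adjust the device name to your system) is:
sudo smartctl -a /dev/sda
which requires the smartmontools package and prints the SMART attributes along with an overall health self-assessment.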
I installed it from the Tor repository and got it working. I was able to run bitcoin-qt over Tor, since it shows the "P" (proxy) icon on the GUI and I'm only connecting to .onion addresses in the Peers window. However, some things still don't seem to be fully working.
1) The RPC does not work when I use Tor for some reason. I try
./bitcoin-cli -datadir=path getnetworkinfo and it says:
error: timeout on transient error: Could not connect to the server 127.0.0.1:18332
Make sure the bitcoind server is running and that you are connecting to the correct RPC port.
There is no cookie file, so maybe I have the wrong settings in bitcoin.conf.
For Tor, I comment everything except this:
rpcbind=127.0.0.1
server=1
proxy=127.0.0.1:9050
listen=1
listenonion=1
onlynet=onion
In some tutorials I saw they use bind= instead of rpcbind=; I'm not sure about that.
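A combined bitcoin.conf sketch that should keep local RPC available while forcing Tor-only P2P (untested here, and assuming default ports; all of these are standard Bitcoin Core options):
server=1
rpcbind=127.0.0.1
rpcallowip=127.0.0.1
proxy=127.0.0.1:9050
listen=1
listenonion=1
onlynet=onion
Also make sure bitcoin-cli is pointed at the same network and datadir as bitcoind (the same -datadir=path, plus -testnet only if the node actually runs testnet); otherwise it will try the wrong RPC port, e.g. 18332.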
For clearnet, commenting everything except this:
rpcbind=127.0.0.1
server=1
listen=0
works: bitcoin-cli runs the commands and I see the cookie file. So I guess it has to be something in bitcoin.conf that needs a Tor-specific setting.
Another thing: when I run with the Tor settings I described, I don't see that an onion service is created, so I think my node is not reachable. But I don't get it, because from what I've heard Bitcoin doesn't run over Tor unless your node is reachable, yet it was downloading blocks in Tor mode, so I don't know.
I'm supposed to see this in debug.log, but it's not there:
tor: Got service ID XXXXXXXXXXX, advertising service XXXXXXXXXXX.onion:8333
And with getnetworkinfo I get this:
{
"version": 290100,
"subversion": "/Satoshi:29.1.0/Knots:20250903/",
"protocolversion": 70016,
"localservices": "some number here with a c and 2 numbers",
"localservicesnames": [
"NETWORK",
"WITNESS",
"NETWORK_LIMITED",
"P2P_V2",
"REPLACE_BY_FEE?"
],
"localrelay": true,
"timeoffset": 0,
"networkactive": true,
"connections": 0,
"connections_in": 0,
"connections_out": 0,
"networks": [
{
"name": "ipv4",
"limited": true,
"reachable": false,
"proxy": "127.0.0.1:9050",
"proxy_randomize_credentials": true
},
{
"name": "ipv6",
"limited": true,
"reachable": false,
"proxy": "127.0.0.1:9050",
"proxy_randomize_credentials": true
},
{
"name": "onion",
"limited": false,
"reachable": true,
"proxy": "127.0.0.1:9050",
"proxy_randomize_credentials": true
},
{
"name": "i2p",
"limited": true,
"reachable": false,
"proxy": "",
"proxy_randomize_credentials": false
},
{
"name": "cjdns",
"limited": true,
"reachable": false,
"proxy": "127.0.0.1:9050",
"proxy_randomize_credentials": true
}
],
"relayfee": 0.00001000,
"incrementalfee": 0.00001000,
"localaddresses": [
],
"warnings": [
]
}
Also, I don't get the onion_v3_private_key file that the guy in the video gets in /.bitcoin, so I'm not sure in which state Tor is being run. I mean, it's connecting only to peers with onion addresses and it's downloading blocks, so in theory it's working. However, I'm not sure what I'm missing there.
Is it that it's working fine but I'm not reachable to other people? But again, listen=1 is enabled (from what I can read it wouldn't even work in Tor mode otherwise), yet I get 10 outbound / 0 inbound connections (it shows 0 incoming connections in that getnetworkinfo output because the node is fully synced, and I guess once it's fully synced it barely needs a peer every few minutes to update the blockchain). So I'm not sure what's up with this.
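One likely missing piece (an assumption about this setup, not something verified here): bitcoind only creates its own hidden service, writes onion_v3_private_key, and logs the "tor: Got service ID ..." line when it can reach Tor's control port. A minimal sketch of the extra settings that would be needed:
torcontrol=127.0.0.1:9051
and, in torrc:
ControlPort 9051
CookieAuthentication 1
(the user running bitcoind also needs read access to Tor's control auth cookie).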
Btw, I get a clearnet IP with getnodeaddresses:
[
{
"time": some number here,
"services": some number here,
"address": "some clearnet ip address here,
"port": 8333,
"network": "ipv4"
}
]
I just would like to know what's up with these, since I'm not sure if it's wrongly configured and I'm connecting to people over a clearnet IP while receiving .onion addresses, or something.
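A way to double-check what the node is actually connected to (as opposed to addresses it merely knows about) is:
./bitcoin-cli -datadir=path getpeerinfo
and looking at each peer's "network" field; with onlynet=onion these should all say "onion". getnodeaddresses, by contrast, just lists addresses from the node's address manager, which can include clearnet addresses learned over Tor.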
Recently, I shared a puzzle based on Proof of Work inside Script. It is described in detail here:
https://bitcointalk.org/index.php?topic=5551080.0
In the shared puzzle, the private key used to grind solutions is equal to one. This was done on purpose, to make it easy for anyone, anywhere, to recreate. However, Proof of Work can be used in many different places. It can be used to build sidechains, which would be directly pegged into Bitcoin. Then, from the perspective of an on-chain node, sidechain transactions will be simplified into one-input-one-output chunks, with attached peg-ins and peg-outs. Any sidechain miner can take fees from sidechain users and pay the rest to the mainchain miners. I think it is a good idea to show an example:
+-------------------------------------------------+
| Puzzle 0.00050000 BTC -> Miner 0.00053000 BTC |
| Alice 0.01000000 BTC Bob 0.00999000 BTC |
| Charlie 0.02000000 BTC Daniel 0.01999000 BTC |
| Elaine 0.03000000 BTC Frank 0.02999000 BTC |
+-------------------------------------------------+
Here, an unsolved puzzle with some difficulty is used as a transaction input. Any sidechain miner can start grinding it, using SIGHASH_ANYONECANPAY to make sure that anyone can put more coins in, or bump the on-chain fees if needed.
Then, people joining the sidechain will provide their inputs, and people leaving the sidechain will take their outputs. In the example above, the on-chain transaction has zero fee, but it can be higher if the output amounts are lowered (and if the sidechain miner is also a mainchain miner, it can decide to prioritize sidechain transactions inside the Bitcoin blocks it produces). When it comes to sighashes, the puzzle solver can use SIGHASH_ANYONECANPAY with SIGHASH_ALL, and other people who want to join the sidechain can use SIGHASH_NONE, signing all inputs to make sure they will be included only if the Proof of Work puzzle is solved. They can also put any commitment into the r-values of their signatures, so it is possible to validate later whether the sidechain rules were followed correctly and whether anybody tried to steal coins.
In this example, we can see the finalization transaction, which is executed periodically, depending on the chosen Proof of Work, which decides how often new sidechain transactions are broadcast. It can be done every three months or so, to align it with other sidechain proposals like BIP-300 or BIP-301, but it mainly depends on users: how much they are willing to pay in fees, and how long they want to wait to benefit from batching and pay lower fees as a result.
In general, the minimal working example is when every user stays inside the sidechain. Then, the whole on-chain representation can be simplified into just this:
+---------------------------------------------------------------------------+
| SponsorA 0.00010000 BTC -> Puzzle 0.00050000 BTC -> Puzzle 0.00050000 BTC |
| SponsorB 0.00015000 BTC |
| SponsorC 0.00025000 BTC |
+---------------------------------------------------------------------------+
In this case, a group of sponsors can start putting their coins in, to transfer them from the mainchain to the sidechain. The puzzle can be very similar to the original, but the committed difficulty and public key can be rotated on the fly, depending on what is happening inside the sidechain mempools. Sidechain users can keep making transactions between themselves, and sponsors can collect this information and use the merkle root of the next network state as the private key for the grinded signatures (instead of using a private key equal to one, as in my puzzle). Then, when some sidechain miner finds the solution, it can share a signature signed with SIGHASH_ANYONECANPAY and use any coins to set any fees (or process it for free, if it can also mine mainnet blocks).
Then, during Initial Blockchain Download, Bitcoin nodes don't have to know about the sidechain's existence at all. From their perspective, there are just some signatures that happen to be smaller than usual (which is visible in the Script). They don't have to verify the correctness of the sidechain merkle tree; it is treated just as a chunk of bytes, checked only for ECDSA correctness, and no commitment behind it is ever processed by existing nodes.
For each sidechain, a single UTXO is all we need to keep it running. Inputs and outputs are needed mainly for peg-ins and peg-outs, when people want to move between the sidechain and the mainchain, or even jump directly from one sidechain puzzle to another. Nodes can keep signing different versions of the on-chain transaction, similar to how Lightning nodes sign them. The final version is broadcast to the mainchain nodes when some sidechain miner finds a solution, claims the reward, pushes the sidechain difficulty a little higher, and commits the state of the whole sidechain on-chain. As long as everyone stays inside, it is all about changing one 256-bit number into another, so the on-chain transaction size is mainly affected by peg-ins and peg-outs; the internal state of the sidechain is invisible to mainchain nodes, and it can enforce whatever rules its creators pick.
I think producing a new sidechain header could be compared to consuming a single transaction input and producing a single output (everything else is related to peg-ins and peg-outs; if there are none, then one-input-one-output is all that is needed). Mainchain users can then see each sidechain block header and check the Proof of Work behind it, but everything else can stay inside the sidechain. I guess it would scale better than the Lightning Network, because transactions inside the sidechain wouldn't require constantly closing and opening channels, and could be simplified to just replacing one 256-bit number with another, used for the next puzzle.
Edit: Sidechains can be improved with Optional Hourglass. Then, the private key can simply represent the state of the whole sidechain (for example its UTXO merkle tree, or something similar), and sidechain miners can try to create stronger and stronger signatures, which could be confirmed at earlier and earlier block heights. Here is an example of a sidechain using the Optional Hourglass envelope and committing to the "Hello World" content:
SHA-256("Hello World")=d=a591a6d40bf420404a011733cfb7b190d62c65bf0bcda32b57b277d9ad9f146e
Q=d*G=0298C39AC0D91FF4CEA6E79AE5836E50868C47191BCA0FBFD2A6838D303665F506
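(For anyone who wants to reproduce the first value, assuming a shell with coreutils available: echo -n "Hello World" | sha256sum should print the same digest.)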
And now, we can require at least 13150 ACKs, while also enforcing Optional Hourglass envelope:
decodescript 210298c39ac0d91ff4cea6e79ae5836e50868c47191bca0fbfd2a6838d303665f506ac
{
"asm": "0298c39ac0d91ff4cea6e79ae5836e50868c47191bca0fbfd2a6838d303665f506 OP_CHECKSIG",
"desc": "pk(0298c39ac0d91ff4cea6e79ae5836e50868c47191bca0fbfd2a6838d303665f506)#ryjv7lc4",
"type": "pubkey",
"p2sh": "2NEhtT2UPFkvRgMCQ4azmNMEGtVQfmy33vL",
"segwit": {
"asm": "0 d35986305ce10537dc781e795e734673035c4160",
"desc": "addr(tb1q6dvcvvzuuyzn0hrcreu4uu6xwvp4cstqk89zqn)#lgthsn5s",
"hex": "0014d35986305ce10537dc781e795e734673035c4160",
"address": "tb1q6dvcvvzuuyzn0hrcreu4uu6xwvp4cstqk89zqn",
"type": "witness_v0_keyhash",
"p2sh-segwit": "2N4ZPqUwLBg1FiBuUKyiVckfH63n71KamNG"
}
}
decodescript 7c8276937693025e3393b2757cab76a914d35986305ce10537dc781e795e734673035c416088ac
{
"asm": "OP_SWAP OP_SIZE OP_DUP OP_ADD OP_DUP OP_ADD 13150 OP_ADD OP_CHECKSEQUENCEVERIFY OP_DROP OP_SWAP OP_CODESEPARATOR OP_DUP OP_HASH160 d35986305ce10537dc781e795e734673035c4160 OP_EQUALVERIFY OP_CHECKSIG",
"desc": "raw(7c8276937693025e3393b2757cab76a914d35986305ce10537dc781e795e734673035c416088ac)#mrd0n6f8",
"type": "nonstandard",
"p2sh": "2Muhrn2Y5PgHQZRzL9fevf7j34s8aaXwmbg",
"segwit": {
"asm": "0 419e580de6345f1ebb6253682f62716098d3ad1bda5b631a752f2484c2342913",
"desc": "addr(tb1qgx09sr0xx303awmz2d5z7cn3vzvd8tgmmfdkxxn49ujgfs359yfs8fyrcn)#yqw84xvm",
"hex": "0020419e580de6345f1ebb6253682f62716098d3ad1bda5b631a752f2484c2342913",
"address": "tb1qgx09sr0xx303awmz2d5z7cn3vzvd8tgmmfdkxxn49ujgfs359yfs8fyrcn",
"type": "witness_v0_scripthash",
"p2sh-segwit": "2N1tpGp3eEKZgjxXesbk2RFebaLwqY6QWAF"
}
}
And then, people can constantly try to mine addresses like tb1qgx09sr0xx303awmz2d5z7cn3vzvd8tgmmfdkxxn49ujgfs359yfs8fyrcn, while sharing their ACKs between themselves. The final winner is the miner who produces the smallest signature within three months.
Not merging this with my last post because it wasn't a double post; rather, the poster deleted their reply.
What's the interest in siding with the spam instead of trying to address it, especially if miners were bypassing intended transaction policy for profit?
Who is siding with spam? Intended policy by whom? The very people who advocated for this particular limit initially -- e.g. myself! -- support removing it now. I was harassed for years over the op_return limit and even subjected to threats. Where the heck were you then?

Certainly not out saying it was a good thing and that I'm not a terrible person for supporting it.
Aside, when I say spam here I'm just adopting the language of this discussion. I don't actually think it's a good description. Spam is a message sent to you that you didn't ask for and almost certainly don't want by a second party who hopes to profit from it and who paid essentially nothing to send it causing you to waste a lot of resources reading it. You win against the spammer if you don't have to read the message even if your computer processed it or other people read it.
Spam by this normal definition doesn't exist in Bitcoin, except perhaps for dusting, which almost no one cares much about and isn't the subject of these discussions. The stuff that people are calling spam in Bitcoin is where two consenting parties transact with each other entirely consensually, and they pay a third-party miner handsomely, with bitcoin, to process it. This irritates some Bitcoin users[1] because it consumes network capacity (like any other txn) and does so for the benefit of some activity the user deems to be not sufficiently Bitcoin-related. To defeat it, it isn't enough that the user personally doesn't see it (they wouldn't have anyway); rather, they must ensure that no one sees or processes it, because Bitcoin is a consensus system, so as soon as one miner accepts a valid transaction, all participants must accept it.
[1] including myself! The distinction is that I don't think being irritated by it means it would be right to try to stop it, or that I have the ability to stop it.
"The trouble with fighting for human freedom is that one spends most of one's time defending scoundrels. For it is against scoundrels that oppressive laws are first aimed, and oppression must be stopped at the beginning if it is to be stopped at all."
-- H.L. Mencken
This doesn't address pressure on small miners and centralization, it just makes it worse. The dangers down the line from such changes are significant.
In what way does removing this limit harm small miners or encourage centralization? What are the specific 'down the line' dangers you're referring to? Surely they can be enumerated if they're significant.
It's easy to enumerate the opposite:
With the limit in place, miners that bypass the limit will earn more income than others. They get the transactions via direct submission. There is little reason for someone to direct-submit to small miners, since the large ones do the job and the small ones may well be (and hopefully are!) anonymous. Lots of software authors already somewhat prefer direct submission because submitting to an API fits better with JS-jockey programming practices than submitting to a P2P network; if direct submission is also more reliable at getting their tx mined, then convincing them to do otherwise is a losing battle. In any event, the consequence of direct submission is that the biggest few miners make more income, and because mining is highly competitive by design, this ultimately tends to push smaller miners into operating at a loss.
Similarly, when transactions that will get mined don't get relayed on the public P2P network, the blocks containing them propagate slowly to miners that learn of blocks via the network (especially small and/or anonymous ones). Slow propagation makes mining more race-like and less lottery-like. In a lottery you win in proportion to your tickets (hashrate); in a race the fastest always or almost always wins. If blocks always transferred instantly, then mining would work as it is ideally imagined: like a perfectly fair lottery. Slowdowns increase stales for everyone, but cause smaller miners to experience more stale blocks than larger miners, and as difficulty adjusts, larger miners make proportionally more than smaller miners. Another centralization pressure.
I don't think these points can be denied though it's not unreasonable to debate their significance. However, on the other side what is the downside to weigh them against? The limit is already functionally not enforced. The spammers already have alternatives that are better for them and/or worse for the resources needed to run the network. An argument that "spam is bad" doesn't advance the discussion about "is removing the opreturn limit bad?" when removing the limit won't increase the spam.
We all know that when you make a Bitcoin transaction, you're competing for a limited amount of space in the next block, and because of mining incentives you have to pay a fee to get it confirmed. The size of this fee isn't based on the amount of BTC you're sending, but on the size of your transaction (how many inputs and outputs it spends and creates) and on how congested the network is. This is where the concepts of transaction weight and mempool depth come into play.
Transaction weight
A transaction's weight is basically just a measure of its size, but it's not a simple byte count. The concept of weight was introduced with the SegWit upgrade, which significantly changed how transaction data is measured.
A Bitcoin transaction is actually made up of two major parts:
- The core transaction data, which includes the list of inputs and outputs.
- The witness data which contains digital signatures that prove you have the right to spend the bitcoins.
To fit more transactions into a block, SegWit introduced a system that gives witness data a discount. The total size of a block is limited to 4 million weight units (WU). Non-witness (legacy-style) transaction data is heavy, counting as 4 WU per byte, while witness data is lighter, counting as 1 WU per byte. This system incentivizes wallets to use SegWit transactions, which lets more of them fit into a block and helps scale the network. Your final fee is calculated by multiplying your transaction's virtual size (its weight divided by 4, measured in vbytes) by your chosen fee rate in sats/vB, not by raw bytes; most people make that mistake.
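As a rough worked example with purely illustrative numbers (not taken from any real transaction): a transaction with 110 non-witness bytes and 107 bytes of witness data weighs 110 × 4 + 107 = 547 WU, which is ceil(547 / 4) = 137 vbytes; at a fee rate of 20 sats/vB that comes to 137 × 20 = 2,740 sats in fees, regardless of how much BTC it moves.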
Mempool depth
A transaction doesn't go directly into a block. First it gets broadcast to the network and sits in a waiting area known as the mempool. Every full node has its own mempool, and nodes relay unconfirmed transactions to each other. Miners look at the transactions in their mempool to decide which ones to include in the next block, and because they are motivated by profit, they prioritize the transactions with the highest fee rates, not the highest absolute fees.
Now, the depth of the mempool is a measure of this waiting room's congestion. It's not a technical property of a single transaction, but rather a dynamic state of the entire network.
Imagine the mempool as a long queue with different lanes for different fee rates. The highest-paying transactions are at the front and the lowest are at the back. When we talk about mempool depth, we are referring to how many transactions are waiting in each of those lanes. When the network is congested, the depth of the mempool increases, which means you may have to pay a higher fee rate to get to the front of the line. However, if the network is less congested and the mempool is shallow, a small fee has a very good chance of getting you a quick confirmation. This is why Bitcoin transaction fees are not fixed.
Congestion also explains why the depth you see at the time you broadcast your transaction may not be the depth at which it eventually gets confirmed.
Hello Bitcoiners!
I started my journey by getting inspired by
NotATether's Lightning Node challenge, after finishing my 14-day Bitcoin Core node run challenge. In this forum there is already a detailed guide from
Satofan44 on
🔥🔥 Complete GUIDE for Lightning Desktop Nodes. Since his tutorial was based on Ubuntu, I am writing this one for Windows users, as most users in the forum are desktop users, and those who participated in the 14 days of Bitcoin Core node run were mostly on Windows. So I'm writing this tutorial on their behalf.
I followed several pieces of documentation; most of them were outdated or missing some parts. After working through all the issues I faced, I noted down every problem, and in the end I was able to run a working LND node on the Lightning Network.
What is the Lightning Network?
The Lightning Network is a second layer built on top of Bitcoin that enables:
- instant
- low fees
- scalable bitcoin payments
Requirements
- Windows PC (Windows 10 or higher)
- At least 600GB of free disk space for a Bitcoin Core full node (unless you use pruned mode)
- Stable internet and patience
Phase 1 - Download and sync a full Bitcoin Core node (if you have already done this you can skip this part)
A fully synced Bitcoin Core node is required for your LND node to work properly. However, if you still insist on running in pruned mode, be aware of these limitations:
- Pruned nodes delete old blockchain data → LND can’t find some transactions it needs
- This breaks channel creation, backups, and some RPC commands
- Save your transaction details manually for later queries
STEPS
1. Download Bitcoin Core
👉
https://bitcoincore.org/en/download/
2. Install it and run it.
3. Let Bitcoin Core run in the background until it is fully synced, since it will download about 600GB worth of block data.
4. This might take you anywhere from 24 hours to 3-4 days, depending on your PC hardware and your internet speed.
5. After your Bitcoin Core node is fully synced, follow this:
6. Find your Bitcoin Core data directory/path; it will usually be C:\Users\<YourUsername>\AppData\Roaming\Bitcoin
7. Or you can jump straight to it: press
Windows+R, paste this
%appdata%\Bitcoin
and press Enter; it will take you to your Bitcoin Core data directory.
8. Open
bitcoin.conf in Notepad and add this:
server=1
txindex=1
rpcuser=bitcoin
rpcpassword=your_secure_password
zmqpubrawblock=tcp://127.0.0.1:28332
zmqpubrawtx=tcp://127.0.0.1:28333
9. If you are running your Bitcoin Core in
pruned mode:
server=1
prune=20000
txindex=0
rpcuser=bitcoinuser
rpcpassword=your_secure_password
zmqpubrawblock=tcp://127.0.0.1:28332
zmqpubrawtx=tcp://127.0.0.1:28333
Note: you must change the password, and set the prune amount according to how you run your Bitcoin Core node. It is suggested to run your node pruned to 10-20 GB; prune=20000 here means roughly 20GB of retained block data.
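Before moving on to Phase 2, one way to confirm the node is fully synced from a Command Prompt (assuming bitcoin-cli.exe is reachable, e.g. from Bitcoin Core's daemon folder; adjust the path to your install) is:
bitcoin-cli getblockchaininfo
"blocks" should equal "headers" and "initialblockdownload" should be false.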
Phase 2 - Download, install and run the LND node
STEPS
1. Download LND for Windows:
👉
https://github.com/lightningnetwork/lnd/releases
2. Download this version - lnd-windows-amd64-v0.19.2-beta.zip [you must download the
windows-amd64.zip one; you will see lots of versions there for Linux, Raspberry Pi etc., so download the specific file I mentioned to avoid any unnecessary issues]
2. Extract the archive's contents to
C:\lnd
[Create a folder named
lnd on
C:\ and extract the LND files you just downloaded into it; you will get lnd.exe and lncli.exe there]
3. Create the LND config folder at this path:
C:\Users\<YourUsername>\AppData\Local\Lnd
4. Inside that folder, create a file named
lnd.conf [make sure the extension is not .txt, because Windows sometimes hides extensions] and put this into that lnd.conf:
[Application Options]
alias=MyLNDNode
listen=0.0.0.0:9735
[Bitcoin]
bitcoin.active=true
bitcoin.mainnet=true
bitcoin.node=bitcoind
[Bitcoind]
bitcoind.rpcuser=bitcoin
bitcoind.rpcpass=your_secure_password
bitcoind.rpchost=127.0.0.1
bitcoind.zmqpubrawblock=tcp://127.0.0.1:28332
bitcoind.zmqpubrawtx=tcp://127.0.0.1:28333
[you can change 'alias'; this will be your node's name. Also make sure you change the password to match bitcoin.conf]
5. Restart your Bitcoin Core
Let's start LND
6. Open Command Prompt:
cd C:\lnd
lnd.exe
On Windows, when using Command Prompt, you often need to specify the exact file name and path. So try this one if that didn't work:
cd C:\lnd
.\lnd.exe
The first time, it will ask you to create a wallet.
7. Open another Command Prompt and run:
cd C:\lnd
lncli.exe create
or
cd C:\lnd
.\lncli.exe unlock
We are being explicit by adding .\ at the start and the .exe extension.
It will ask for a wallet password to create the wallet; enter your wallet password. If you don't see anything on screen, don't worry: PowerShell/Command Prompt doesn't display the password for security reasons. Save your recovery seed. Congratulations, you did everything right; LND is now running. It will take some time to sync with Bitcoin Core.
8. To check your LND node status, run this:
lncli getinfo
You will see something like this:
{
"version": "0.18.5-beta commit=v0.18.5-beta",
"commit_hash": "4ccf4fc24c750d098cf24566ef4bbc0311c7d476",
"identity_pubkey": "0206abb79af738e8009dff2eeb78cac43441c54c32a65db87398a4903ffded7a50",
"alias": "MyLNDNode",
"color": "#3399ff",
"num_pending_channels": 0,
"num_active_channels": 0,
"num_inactive_channels": 0,
"num_peers": 2,
"block_height": 895957,
"block_hash": "0000000000000000000172ec1306a6b2f58314370aef2dd0573a1defadb478d7",
"best_header_timestamp": "1746794076",
"synced_to_chain": true,
"synced_to_graph": false,
"testnet": false,
"chains":
Notice: your node should be showing
"synced_to_chain": true. If it's false, that means it's not yet synced with your Bitcoin Core; it usually takes 5-10 minutes to sync.
Phase 3 - Let's create a channel and pay invoices on the Lightning Network
STEPS
1. To create a channel you need to fund your wallet first, with at least 35k sats: 20k sats for creating your channel and 10k sats for the reserve. You will get this 10k sats back when you close your channel.
2. To fund your wallet, let's get a wallet address first:
lncli newaddress p2wkh
Now send at least 35k-40k sats to this address.
3. Before creating a channel, connect with a peer:
lncli connect pubkey@ip:port
4. You can find all node details here:
https://1ml.com/node
5. Top nodes require 100k sats to create a channel. I created mine with the
Blixt Wallet node; you can try it:
.\lncli.exe connect 0230a5bca558e6741460c13dd34e636da28e52afd91cf93db87ed1b0392a7466eb@176.9.17.121:9735
6. After connecting, now create the channel:
.\lncli.exe openchannel --node_key=0230a5bca558e6741460c13dd34e636da28e52afd91cf93db87ed1b0392a7466eb --local_amt=20000 --private
We are creating a private channel here, because a public channel requires 100k sats.
7. You should see your funding transaction there; it will take 3 block confirmations for the channel to open successfully.
8. Check your channels here:
.\lncli.exe listchannels
or, if it's still pending:
.\lncli.exe pendingchannels
Now let's pay some invoices
8. Grab an invoice or a public address from Nostr or Stacker News:
lncli payinvoice <invoice_string>
It will look something like this:
lncli payinvoice lnbc160u1p5x5rlqpp5qnvmh2smde2mdayhnu8he20nkejxes3hw9k77036ce6t5kh5ve7qdqqcqzysxqrrsssp5lgfy8tfwnfmy6jvk57867zyganucsk9t3fnxug5sfcwegkkxt89q9qxpqysgqtr8htaxw9avqa9ywn4qs47d5vxm44r7l2ssfmt7ch4u36yyqs9aru25psf5vuhlydgnfrysgd0zzq37dsuq0z4qandjlptgnl0p2lfqpgmrqg0
9. To create your own invoice
lncli addinvoice --amt=5000
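To check later whether an invoice you created has been paid, you can list your invoices with the same lncli setup used above:
.\lncli.exe listinvoices
and look at the invoice's "state"/"settled" fields in the output.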
Congratulations if you pulled all this off and succeeded!
You may find some difficulties connecting to some public nodes; it is recommended to use a VPN. I faced this issue and then tried Proton VPN; it was basically my ISP blocking the peer connections. Satofan44 asked me to collaborate with him, but I was actually so busy that I had to refuse. It took me a month to create and open a channel and run a Thunderhub web page after running my LND node. I am too lazy, I guess.
Maybe I have missed a lot of things, or my writing could be bad or not well structured; please bear with me. Apologies for any kind of inconvenience.
Not really, because anyone can disable PM-notification-emails if they want to (but you have a point).
Yeah, but, that setting affects reception, not transmission. As in, if you disable e-mailed PM notifications, then the PMs you receive won't be duplicated over SMTP, but the PMs you send can still be.
On Bitcointalk, on the other hand, all I have to do is go to the website, my browser keeps me logged in, and I can read any PM. I don't even read the content of the email notification, because the layout is less clear than the actual PM. So I wouldn't miss it if it were gone.
Yup. That's kind of my point: If very few people have a genuine need (or at least, a deep appreciation) for the ability to be out-of-band sent HTML-stripped versions of the PMs they receive, then, I think the forum should take the message content itself out of any e-mailed PM notifications (for all the reasons I mentioned).
If it were up to me, I'd take out the subject, the sender, and the instant-reply link, too, but, I can't see getting a change like that past theymos.
How cool would it be if the forum has an easy to use client side encryption (like Protonmail)? PGP involves copy/pasting messages and even if I'd want to use it, it would be the rare exception amongst thousands of PMs. Privacy should be easy for mass-adoption.
By encrypting everything by default, any outside observer wouldn't know if it's sensitive or not. I don't remember where I read it, but: "nobody has to know I have nothing to hide".
Agreed. And it's something that I've considered doing more than once...
There are three stumbling blocks (that I can see):
(*) It would involve JavaScript and move Bitcointalk even further toward a state of not being able to work without scripting. (This doesn't bother me any, but, I'm aware that there are some no-JS folks out there that really bristle at being forced to enable browser scripting. I don't find their stance to be realistic, but, I can't say that I blame them for feeling the way that they do: Most programmers, and especially web developers, seem to have no problem with relying on a mutating nest of dependencies that they could never have written on their own, and therefore can't fully understand. You shouldn't accept a vouch from someone when it's about something that they don't understand. If you can't program a given thing from scratch, then you don't understand it.)
(*) It would break PM search. (But, I don't see this as a huge problem. When I originally made this topic, I was working on a filter-by-user patch for PMs. That patch slipped through the cracks and I forgot about it, but, I left it in a close-to-finished state, and if I finish it now and manage to convince theymos to merge it, then, I could see a lack of server-side PM search being much less annoying. Eventually, I could implement client-side search based on server-side user-filtering, but, its first-use/uncached bandwidth usage would depend on how many PMs you've sent to or received from that particular user. I've also got some ideas around re-basing the whole PM system on top of a rank-dependent amount of per-user API-accessible storage, and I could make something like that work really efficiently, but, those ideas are too involved to unpack here.)
(*) I forget the third point I was going to make. It was prolly good, though.

Anyway, when something gets complicated enough that I either can't see a way to very safely splice it into the existing software design, or I can see a way to do that but I expect it to be a huge uphill battle to get it merged, then, my energy wanes and I try to turn my attention back to very small improvements that don't leave much room for argument.
I think what a lot of people don't really understand about me is that I'm in a very particular "mode" when I'm on Bitcointalk: I very rarely suggest (or code) the things that I personally want, because I realize that the things I want are radical, and I don't have the energy to argue for them in what I perceive to be a very change-resistant environment (I don't only mean the user base; I'm also referring to theymos, because, ultimately, things come down to, or are at least very affected by, what he personally likes and dislikes). I don't begrudge theymos his iron grip on Bitcointalk, because I understand it, and my own grip would be at least as tight if I were in his position, but, it leaves me in a situation where I know that I'm not going to be able to get things over a certain complexity-limit or even with a certain flavor past him. Unfortunately, I also know that I'm not really built for the kind of work that I get to do for the forum, and so I'm almost certainly going to run out of interest at some point and move on to things that I actually find stimulating (or at least ideologically satisfying). So, I'm stuck with the problem of how to intelligently ration out my dwindling supply of energy so that I can get the most amount of "good" done while I'm still around to affect things (not only that, but, I also have to make my decisions as smartly as I can in the presence of a tech lead that seems to lean very heavily toward inaction, and a community that sometimes makes either the mistake of engaging in far too much wishful thinking given the status quo, or the mistake of encouraging inaction by discussing things to death, instead of just saying: "Yeah, that would be an improvement. +1").
CTRL-N > b ENTER > click MESSAGES. The slowest part is loading the messages (with hundreds of pages). Unless you're not signed-in already, but I don't really see a reason for that on my own computer.
Yup. That's the basis of that argument (not being signed-in). Like you, I have no need to read PMs without also being signed-in to Bitcointalk, but, like I said, I'm playing devil's advocate with all three of my arguments against implementing this change.
That may be close enough to what you're suggesting, and it's already implemented (for Newbie-senders only).
Yup. That came up in a private conversation I had a while ago about this. Like most of the diffs I share on the forum, my expectation is that theymos will re-imagine them in terms of his own source tree (as in, I can't see anything besides 1.1.19, so it's often the case that my diffs are "wrong", but, he knows that, and can account for it).
I have said it many times: you can guard an ice cream shop with one guy, and you can guard a Walmart with two armed bodyguards, but I doubt guarding the White House with three guys is a good idea.
If you want to be protected by more Proof of Work, then say it explicitly inside your script:
OP_SIZE <difficulty> OP_LESSTHAN OP_VERIFY
OP_CODESEPARATOR
<pubkey> OP_CHECKSIG
Or even, you can make it signable in a very similar way, as existing P2WPKH outputs are:
OP_SWAP OP_SIZE <difficulty> OP_LESSTHAN OP_VERIFY OP_SWAP
OP_CODESEPARATOR
OP_DUP OP_HASH160 <pubkeyHash> OP_EQUALVERIFY OP_CHECKSIG
And then it is up to you how much Proof of Work you want to require in your transactions. Even if the main chain were protected only by regtest difficulty, double-spending your transaction could be very difficult and require a lot of Proof of Work.
For example: try double-spending this transaction:
https://mempool.space/tx/8349df0753e80cce322322f1b76789e1d0fd6693aed2f4de4e49576423081ae7
See? It is difficult. First, because it is covered by mainchain Proof of Work, and second, because even if it weren't, you would still need a few days of grinding with GPUs just to produce an alternative valid version of that transaction and move the same coins, even in regtest.
So it is up to users how much Proof of Work they will want to spend in the future to protect their own coins.
So there is a "migrate" option? you just click "restore", then click "migrate" and you are set?
Yes, you should see the restored wallet's name in Migrate.
But take note that if the wallet contains a lot of private keys that are not derived from an HD key, it could take a while for the migration to finish.
And also, your wallet will now start to use new active descriptors for the new addresses that you'll request but it will still retain its old keys.
-snip- And to create the watch-only wallet, what is the workflow compared to Core? I remember reading a tutorial but I'm not sure now; you have to dumpprivkeys or something, but I don't get it, since I only want to export the public keys into the watch-only wallet.
Yes, same as Core.
Basically: create a descriptor wallet with the "disable private keys" option, then import your public descriptors (with xpub) to it via the importdescriptors command.
You can easily test this by installing the software itself and starting it in RegTest (--regtest), preferably pointed to a temp datadir (--datadir=<path>).
As for the tutorial, here's one by TraChang: /index.php?topic=5392824.0 (for HD descriptor wallets)
If you want it to have all four available script types, you must also import the receiving and change descriptors of the other three script types from the cold-storage wallet.
But for old wallets with "Just a Bunch of Keys", this will be a tedious task, as you'll have to import each public key as a single-key descriptor.
Example single-key descriptors (these should show in listdescriptors after migrating):
pkh(03544894cbe2a7bed80948846d41d46ab37ea9cb437bd2581011108bee120fc67c)
pkh(02146dd1c325050cc6869eff1dd88208d222e74c374aad9753bbbf2a8441bd2ed9)
pkh(03957f7bd48709d8fdcf326425ef16ff677472bb6f0a0ec96ac263e96b7eb743d3)
Example import to the watch-only wallet:
importdescriptors "[{\"desc\": \"pkh(03544894cbe2a7bed80948846d41d46ab37ea9cb437bd2581011108bee120fc67c)#p6xx8tek\",\"label\": \"Key1\",\"timestamp\": \"now\"},{\"desc\": \"pkh(02146dd1c325050cc6869eff1dd88208d222e74c374aad9753bbbf2a8441bd2ed9)#qxxda7ux\",\"label\": \"Key2\",\"timestamp\": \"now\"},{\"desc\": \"pkh(03957f7bd48709d8fdcf326425ef16ff677472bb6f0a0ec96ac263e96b7eb743d3)#59fu36gk\",\"label\": \"Key3\",\"timestamp\": \"now\"}]"
For each of the descriptors' checksums in the import command above (e.g. #p6xx8tek), I just did the "lazy method" of putting a placeholder of "#00000000" and letting Core/Knots show me the correct checksum, in respective order. Then I edited those placeholders with the correct checksums.
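Alternatively, the getdescriptorinfo RPC can compute the checksum for you before importing, e.g.:
getdescriptorinfo "pkh(03544894cbe2a7bed80948846d41d46ab37ea9cb437bd2581011108bee120fc67c)"
The result's "checksum" field (and the "descriptor" field, which already carries the #checksum suffix) is what goes into importdescriptors.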
Lastly, if you're doing it in a batch, when you import the last descriptor, replace \"timestamp\": \"now\" with \"timestamp\": 0 for it to rescan. Alternatively, use the rescanblockchain command.
For (3), I hope I'll manage to become a merit source, but even if I won't, I appreciate you taking the time to review my application.
cheers,
apogio
You have done a great job for yourself and for the forum. I must say that your contribution to this forum is highly commendable; I have learnt a lot from your posts and you have made me gain more knowledge about the forum.
Considering the level you have attained in the forum and your contribution to the Bitcointalk community, I strongly believe that your application to be a merit source comes at the right time and is well deserved.