Showing 20 of 50 results by npuath
Post
Topic
Board Beginners & Help
Topic OP
test
by
npuath
on 02/03/2023, 22:50:47 UTC
test
Post
Topic
Board Development & Technical Discussion
Re: BitCrack - A tool for brute-forcing private keys
by
npuath
on 08/02/2023, 10:57:00 UTC
But for the example above:  2^69 / 55,246,870 = 10,684,692,370,060; now take that and multiply by 1.72GB = 18,377,670,876,503GB, double all numbers for 2^70. That's a lot of GBs Smiley
Right, it's 18 ZB, roughly the same ballpark as the 24 ZB I get using 20 bytes per address (i.e. just the HASH160, no prefix, checksum or private key).
In other words, about twice as much as all the world's storage capacity (HDD, flash, tape, optical) in 2023, and it would cost something like $1000 billion ¹.
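
For anyone who wants to check the figures, a quick back-of-the-envelope script (a sketch only; 1 ZB is taken as 10^21 bytes, and the 1.72 GB per 55,246,870-key batch comes from the quote above):

Code:
ZB = 10**21  # bytes per zettabyte

# The quoted method: 2^69 keys in batches of 55,246,870 at 1.72 GB per batch.
quoted_bytes = 2**69 / 55_246_870 * 1.72e9
print(f"quoted method, 2^69: {quoted_bytes / ZB:.1f} ZB")      # ~18.4 ZB

# Plain HASH160 storage: 20 bytes per address, no prefix, checksum or key.
hash160_bytes = 2**70 * 20
print(f"20 B per address, 2^70: {hash160_bytes / ZB:.1f} ZB")  # ~23.6 ZB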


¹ Using data from IDC and their "Worldwide Global StorageSphere" metric (not to be confused with the "DataSphere", which measures the amount of data created and is some 10x bigger).


Post
Topic
Board Development & Technical Discussion
Merits 1 from 1 user
Re: BitCrack - A tool for brute-forcing private keys
by
npuath
on 07/02/2023, 23:18:31 UTC
⭐ Merited by ETFbitcoin (1)
It is a random dataset of around 2^70 addresses
Wait until April 1, then by all means "put them in a .txt file".
Post
Topic
Board Development & Technical Discussion
Re: Why difference in 6 blocks is enough to think the transaction is secure?
by
npuath
on 07/02/2023, 21:10:57 UTC
As the average difficulty goes up, pools tend to run mining software that have a predictable reorg policy, in order to minimize the probability that their own blocks get invalidated. That is why we don't see large reorgs these past few years.
Could you elaborate on this? What is a reorg policy, and are there unpredictable variants?
I would have guessed that the decline in reorg frequency and depth is the result of lower inter-miner latency.
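
For reference on the thread title, the usual basis for "6 blocks is enough" is the attacker catch-up probability from section 11 of the whitepaper; a minimal sketch of that calculation (independent of whatever reorg policy pools run):

Code:
# Attacker catch-up probability, section 11 of the Bitcoin whitepaper.
# q = attacker's share of the hashrate, z = number of confirmations.
from math import exp, factorial

def catch_up_probability(q: float, z: int) -> float:
    p = 1.0 - q
    if q >= p:
        return 1.0                       # a majority attacker always catches up
    lam = z * q / p                      # expected attacker progress while z blocks are found
    prob = 1.0
    for k in range(z + 1):
        poisson = lam ** k * exp(-lam) / factorial(k)
        prob -= poisson * (1 - (q / p) ** (z - k))
    return prob

for z in (1, 6):
    print(z, f"{catch_up_probability(0.10, z):.6f}")
# ~0.204587 for z=1 and ~0.000243 for z=6 with a 10% attacker,
# matching the table in the whitepaper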


Post
Topic
Board Development & Technical Discussion
Re: BitCrack - A tool for brute-forcing private keys
by
npuath
on 07/02/2023, 19:55:58 UTC
Does anyone know how to best search for multiple addresses at once?
As is often the case, the problem as stated is massively underspecified.

For instance:
- Are your target addresses correlated, perhaps even strictly sequential (in which case the solution is trivial)?
- How large is the dataset; is it feasible to transfer it to several kernels for parallelisation?
- Is the dataset mutable, or is it feasible to perform an initial, heavy transformation (in which case perfect hashing might outperform a probabilistic/Bloom filter — see the sketch below)?

Et cetera. A general database engine is almost certainly not optimal in any case.
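
To make the probabilistic option concrete, here is a minimal sketch (pure Python and single-threaded; BitCrack itself does this kind of check on the GPU, and the sizes and placeholder entry below are made up for illustration):

Code:
# Minimal Bloom-filter membership test for a set of target HASH160 values.
# False positives are possible and must be re-checked against the exact list;
# false negatives are not.
import hashlib

class BloomFilter:
    def __init__(self, num_bits: int, num_hashes: int):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray((num_bits + 7) // 8)

    def _positions(self, item: bytes):
        for i in range(self.num_hashes):
            digest = hashlib.sha256(i.to_bytes(4, "big") + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: bytes) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

# Load the target addresses (as raw HASH160 bytes) into the filter once ...
targets = BloomFilter(num_bits=8 * 1024 * 1024, num_hashes=7)
targets.add(bytes.fromhex("00" * 20))            # placeholder entry

# ... then each candidate key's HASH160 only needs a cheap probabilistic check.
print(targets.might_contain(bytes.fromhex("00" * 20)))   # True
print(targets.might_contain(bytes.fromhex("11" * 20)))   # almost certainly False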

Post
Topic
Board Speculation
Re: gigamegablocks
by
npuath
on 07/02/2023, 13:47:52 UTC
The "other kind" of reorgs, where blocks get orphaned because two or more blocks are mined almost at the same time, has as far as I can tell not occurred since 2019.
That link is very out of date. The most recent fork I am aware of is at block 772,981, which was around 2 weeks ago. There was a competing block with the hash 0000000000000000000682990a0dae862b48e0451d619938215dd47ed9560200 mined by Foundry. Usually we see on average around one such event a month.
Indeed it is. I got it from a fellow forum user in another thread. It looks live, but it hasn't been updated since 2020.

I think BSV has a max block size of 128 MB
It's actually 4 GB.
Right you are again; I lazily relied on the English Wikipedia article.

With gigabyte block sizes, I understand and agree completely.

Apparently this topic was created accidentally, and its title doesn't really match the first posts. My interpretation of the OP has been that, as the total chain size (not individual block sizes) grows ever larger, prohibitive hardware and network costs may cause a (further, potentially massive) decline in full nodes.

With current bitcoin block sizes, I still think hybrid pruning (as per my posts above) might be part of a solution to that potential problem.

Post
Topic
Board Speculation
Merits 3 from 2 users
Re: gigamegablocks
by
npuath
on 07/02/2023, 11:23:45 UTC
⭐ Merited by vapourminer (2), Welsh (1)
Why is anyone even still talking about this?
In my case, because with a gigantic chain size, decentralisation is at risk (see above regarding costs etc).

The price of transactions will no longer be a cheap $1 or whatever but that's okay because doing on-chain transactions will be sort of like wiring money is today, something you don't do very often. On-chain will be just for large transactions where the tx fee is entirely negligible, or moving money around, while everything else will be done on Lightning or possibly other later developed L2's where you pay like a penny for a transaction. So it'll be alright for users.
This scares me. What you're describing is mitigation by centralisation, forced trust and forced custody. Unless you're proposing that every user should run her own L2 node, in which case we're back where we started (prohibitive costs).

And for miners it'll be fine because by then miners will almost entirely be using either essentially free stranded energy (startup cost but then extremely low maintenance cost for renewable energy) or they will be mining companies who are tied directly into power plants (which is already being done today) to strengthen the energy infrastructure of society while also getting super cheap energy for mining.
Unfortunately (in my view), this separation between "users" and "miners" is already a fact. The original reason to use proof of work instead of simply counting IP addresses (to determine majorities) was to mitigate the fact that certain actors are able to allocate huge numbers of addresses. The scenario you're describing is orders of magnitude worse but not at all unlikely, I'm afraid. It's not really the topic of this thread though (but very related, I agree).

Post
Topic
Board Speculation
Merits 6 from 2 users
Re: gigamegablocks
by
npuath
on 07/02/2023, 00:15:31 UTC
⭐ Merited by o_e_l_e_o (4), vapourminer (2)
They are not that rare ... a chain reorg is a fork.
Sure, the terms are fuzzy. In this case, I contrasted forks with local chain reorgs. YMMV, but I know of only 6 global and sustained forks (excluding XT, BCH, Gold etc): 2010 (overflow bug), 2013 (db migration fix), 2015 (DER encoding), 2017 (SegWit), 2018 (double spending fix), 2021 (Taproot). Few and big enough that I would advocate revalidation, pruned node or not.

The "other kind" of reorgs, where blocks get orphaned because two or more blocks are mined almost at the same time, has as far as I can tell not occurred since 2019. It used to be much more frequent and might increase again (if f.x. inter-miner latency went up again), but with reasonable pruning parameters, this shouldn't affect a pruned node specifically, as far as I can tell; as soon as a new block is mined (from either chain), the tie is broken (in all likelyhood).

This changes if we adopt blocks which are gigabytes in size. The scam coin BSV, for example, experiences frequent chain splits, some over 100 blocks long.
I take it you don't share the Satoshi Vision Wink I think BSV has a max block size of 128 MB, and if we change BTC's rules to allow gigabyte blocks, my current viewpoints may indeed be invalidated. I still fail, however, to grasp why a larger block size would cause orphans more frequently, let alone in chains of more than a hundred blocks at a time - how could a larger block size stop nodes from receiving any and all blocks from a competing chain in more than a thousand minutes? Especially since the competing chain actually worked harder for 625+ coins (if not, no split and no issue)?
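
Spelling out the numbers behind that rhetorical question (a rough sketch; 6.25 BTC was the subsidy per block at the time of writing, and fees are ignored):

Code:
# Rough cost of a 100-block chain split at early-2023 parameters.
blocks = 100
minutes_per_block = 10            # average block interval
subsidy = 6.25                    # BTC per block in the 2020-2024 halving epoch

print(blocks * minutes_per_block, "minutes of work on the losing side")   # 1000
print(blocks * subsidy, "BTC in subsidies alone forfeited if orphaned")   # 625.0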

And verifying from scratch every time when their blockchain is over 8 TB in size is no small task if your node is pruned and you have to redownload from scratch.
Precisely! That a gigantic blockchain is no small task is the point of this thread Smiley  I'm suggesting that maybe pruned nodes, in combination with cheap, slow, non-transactional storage of pruned-away data, might help alleviate part of the consequences.

Post
Topic
Board Speculation
Re: gigamegablocks
by
npuath
on 05/02/2023, 05:51:32 UTC
This sentence is not written by Molière (if it was, I'm sure that the Administrator, in his wisdom, would sense its significance and leave it alone).
Post
Topic
Board Speculation
Re: gigamegablocks
by
npuath
on 04/02/2023, 13:34:55 UTC
When nodes perform the initial block download and verification, they start at that genesis block and work towards the chain tip, which is the most recent block.
Got you. For a full verification, this is likely the most efficient way, and also how my own implementation works. After that, new blocks are verified up towards the root until a previously verified branch is reached.
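
A toy sketch of both directions, in case it helps (hash-link checks only; real validation of course also covers proof of work, transactions and so on, and all the names below are made up):

Code:
# Toy hash-linked chain: verify forward from genesis once; verify a later block
# only back until a previously verified block is reached.
# Illustration only: no proof-of-work or transaction checks.
import hashlib

GENESIS_PREV = "0" * 64

def block_hash(block: dict) -> str:
    return hashlib.sha256(f"{block['prev']}|{block['payload']}".encode()).hexdigest()

def verify_forward(chain: list[dict]) -> set[str]:
    """Initial verification: start at the genesis block, work towards the tip."""
    verified, prev = set(), GENESIS_PREV
    for block in chain:
        assert block["prev"] == prev, "broken hash link"
        prev = block_hash(block)
        verified.add(prev)
    return verified

def verify_backward(block: dict, by_hash: dict, verified: set) -> None:
    """New block: walk back until a previously verified block (or genesis) is hit."""
    path = [block]
    while block["prev"] not in verified and block["prev"] != GENESIS_PREV:
        block = by_hash[block["prev"]]   # an unknown parent would raise KeyError here
        path.append(block)
    for b in path:
        verified.add(block_hash(b))

chain = [{"prev": GENESIS_PREV, "payload": "genesis"}]
for i in range(3):
    chain.append({"prev": block_hash(chain[-1]), "payload": f"block {i + 1}"})

verified = verify_forward(chain)
new_block = {"prev": block_hash(chain[-1]), "payload": "block 4"}
verify_backward(new_block, {block_hash(b): b for b in chain}, verified)
print(len(verified), "blocks verified")  # 5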

... pruned nodes can be vulnerable in the scenario of a chain split ...
Instinctively, I would say that chain splits present no prune-specific vulnerabilities; even in the extreme case that the split is so far towards the root that it can't be reached from a pruned node's partial branches, there's no risk of false verification. In any case, chain splits are so rare and contentious that it would be prudent to revalidate from scratch after each one, pruned node or not.

... as blocks become exponentially larger, then chain splits tend to become both more frequent and longer ...
Perhaps you mean local chain reorganisations, not actual forks? A node may indeed discover a new, longer chain, orphaning previous blocks, but I don't see how this is prune-specific (given reasonable pruning parameters).

... You still need a full node if you are going to run a wallet ...
I'm sure you don't really mean this  Smiley

... or indeed view any transactions prior to your most recent block. Having only a handful of full nodes responsible for fetching data on all but the most recent UTXOs is a big risk.
Yes, I agree on both. My hopeful guess is that a large enough share of transaction needs could be fulfilled by cheap, pruned nodes (with all blocks from, say, the last 6 months), and that the rest of the data might be stored (locally, hence trusted, or non-locally if partner trust can be established) in a cheap, slow, non-transactional manner.
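
Rough sizing for that "last 6 months" window (a sketch; the average block size is an assumption, and Bitcoin Core's -prune=<n> option, which takes a target in MiB, would only cover the hot part — the cold, non-transactional store would need separate tooling):

Code:
# Back-of-the-envelope sizing for a pruned node keeping ~6 months of blocks.
blocks_per_day = 24 * 6        # ~144 blocks at 10-minute spacing
days = 183                     # ~6 months
avg_block_mb = 1.7             # assumed average block size in MB (it varies)

window_blocks = blocks_per_day * days
window_gb = window_blocks * avg_block_mb / 1000
print(window_blocks, "blocks to keep hot,", round(window_gb), "GB")
# ~26,352 blocks and ~45 GB; everything older goes to the cheap, slow store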

Post
Topic
Board Meta
Re: [212 weeks] [Updated Feb 4] LoyceV's Trust list viewer - Create your own!
by
npuath
on 04/02/2023, 12:20:18 UTC
What are DT1 and DT2, pray tell?
Post
Topic
Board Speculation
Merits 7 from 2 users
Re: gigamegablocks
by
npuath
on 04/02/2023, 11:57:05 UTC
⭐ Merited by o_e_l_e_o (4), vapourminer (3)
... you can prune, but that doesn't stop you from having to download them all and verify them all in order to reach the chain tip ...
In my vocabulary, that's the root not the tip (I mention this since nodes regularly use the tip, but seldom the root). Nomenclature aside, you are truistically right in that one would have to download the full tree in order to locally verify it. And that's important; don't trust, verify.

For one-time verification purposes, there's no technical need to hold the full tree locally at the same time; each branch can be verified separately.

Also, once the validity of the full tree is established at a local node, little is gained from full revalidation. This is the principle which makes pruning possible in the first place.

Of course, the full tree still needs to exist, preferably at as many locations as possible. But not necessarily in an expensive (i.e. fast, locally complete and randomly accessible) form.
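
In miniature, that verify-once-then-discard principle might look like this (a toy sketch; the "state" below is just a balance map, not a real UTXO set, and the data is made up):

Code:
# Verify-once-then-discard in miniature: after a block is validated and folded
# into the running state, the raw block itself no longer needs to stay local.
import hashlib, json

def fold_block(state: dict, block: dict) -> str:
    for tx in block["txs"]:
        state[tx["from"]] = state.get(tx["from"], 0) - tx["amount"]
        state[tx["to"]] = state.get(tx["to"], 0) + tx["amount"]
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

state: dict = {}
last_verified = None
for block in [{"txs": [{"from": "coinbase", "to": "alice", "amount": 50}]},
              {"txs": [{"from": "alice", "to": "bob", "amount": 20}]}]:
    last_verified = fold_block(state, block)
    # the raw `block` can now be archived to cheap, slow storage or dropped

print(state, last_verified[:16])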


Post
Topic
Board Bitcoin Discussion
Re: Praxis for PoS handling of chain reorganisation?
by
npuath
on 02/02/2023, 13:51:33 UTC
Good overview, thanks.
Post
Topic
Board Bitcoin Discussion
Re: Praxis for PoS handling of chain reorganisation?
by
npuath
on 02/02/2023, 10:46:10 UTC
Thanks for the tip link. Right now I'm mostly concerned with blocks that were mined but are no longer part of the longest chain.

In the end, transactions do not just disappear.

I wish we could be absolutely certain of this; then we wouldn't ever require more than 1 confirmation, or even PoW at all  Smiley

For small amounts, 1 confirmation seems to be commonly accepted.

I realise that it's uncommon; thanks to your link and some more research, I can confirm that the number of "extinct" blocks dropped from several per year to only 2 in 2019, and actually none since then.

Unfortunately I can't build my system based on hope alone. How do existing payment systems handle the situation when a transaction in a previously confirmed block actually disappears (perhaps because of a malicious actor trying to double spend)?

Post
Topic
Board Bitcoin Discussion
Re: Praxis for PoS handling of chain reorganisation?
by
npuath
on 02/02/2023, 01:02:14 UTC
Thanks! For both the link and the sentiment.
Am I reading the link right, or have I smoked too much?
Not a single block rejected for four (4) years?
Post
Topic
Board Bitcoin Discussion
Re: Praxis for PoS handling of chain reorganisation?
by
npuath
on 01/02/2023, 13:18:56 UTC
bump
Post
Topic
Board Meta
Re: Use this BBCode to insert historical BTC quote in your post
by
npuath
on 01/02/2023, 12:20:45 UTC
For everyone's information: use "&" to combine multiple queries like color and date at once.

Right, I'll update the OP to be clearer on that, thanks!


Post
Topic
Board Meta
Merits 2 from 1 user
Re: Use this BBCode to insert historical BTC quote in your post
by
npuath
on 01/02/2023, 10:30:18 UTC
⭐ Merited by vapourminer (2)
... Can you retrieve prices with a shorter time history for no date query (eg closing price in the last 1 hour). ...

Sure, [img]https://ztt.se/btc:quote[/img] now yields the real-time XBT average (disregarding forum image proxy caches).


Post
Topic
Board Meta
Re: BTC value at msg date
by
npuath
on 01/02/2023, 10:23:36 UTC
Good idea Smiley Take a look at this thread... I am not sure if it is just me, though; around 4/5 of the posts are not showing the price. ...

Fixed, posts with date Today now also get a price (the XBX average from https://www.coindesk.com).


Post
Topic
Board Speculation
Re: gigamegablocks
by
npuath
on 31/01/2023, 23:41:33 UTC
I hope there aren't any rules against necroposting (I'm still confused over why that's reviled).

Isn't pruning a partial solution?