Board: Development & Technical Discussion
Topic: Re: Are there any benchmark about Bitcoin full node client resource usage?
by gmaxwell on 10/05/2021, 14:12:01 UTC
Quote
But they are actually "grouped or something"
They aren't. The mempool package stuff makes groups, but in blocks transactions aren't grouped in any way; the ordering is arbitrary beyond being topological.
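To make the ordering point concrete, here is a hedged sketch (not Bitcoin Core's actual code) of the one ordering rule consensus does impose inside a block: a parent transaction must appear before any child that spends one of its outputs. The dict-based transaction shape is an assumption for illustration.

```python
# Hedged sketch: the only intra-block ordering rule is topological --
# a parent must appear before any child that spends its output.
# Transactions are modeled as {"txid": str, "inputs": [(prev_txid, vout)]}.
def is_topological(txs):
    block_ids = {t["txid"] for t in txs}  # txids created in this block
    seen = set()
    for tx in txs:
        for (prev_txid, _vout) in tx["inputs"]:
            # an input referencing a txid in this same block must
            # reference a transaction that appeared earlier
            if prev_txid in block_ids and prev_txid not in seen:
                return False
        seen.add(tx["txid"])
    return True
```

Any permutation of unrelated transactions passes this check, which is why the order carries no grouping information beyond the parent-before-child constraint.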

Quote
and there is no "dependent transaction" in bitcoin literature.
LMGTFY.

Quote
Firstly, it is actually N^2/2 for the worst case.
One doesn't generally write in the constant factors when discussing asymptotic behavior, because the true constant factors depend on the exact operations performed. Usually it's sufficient to just think in terms of the highest exponent.
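A minimal illustration of why the constant factor is dropped: N^2 and N^2/2 both quadruple when the input doubles, so they belong to the same asymptotic class and only the exponent governs the scaling.

```python
# Hedged illustration: dropping the constant factor doesn't change the
# asymptotic class -- both cost models quadruple when N doubles.
def cost_full(n):
    return n * n        # "N^2" model of the work

def cost_half(n):
    return n * n / 2    # "N^2/2" model: same shape, half the constant

# doubling the input quadruples the cost under either model
assert cost_full(2000) / cost_full(1000) == 4.0
assert cost_half(2000) / cost_half(1000) == 4.0
```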

Quote
Secondly, it is not how we design algorithms for real world problems, being too much concerned about a case that simply doesn't happen
It is when you write systems where reliability is important or where they must resist attack. Bitcoin shouldn't have any quadratic operations in validation; even one that is "probably safe most of the time" just wastes effort analyzing the attack potential.
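As a hedged, purely illustrative sketch (not Core's actual data structures) of why quadratic operations in validation matter: looking up each input by scanning a list of prior outputs does N scans of an N-long list, O(N^2) total, while building a hash map first makes the whole pass O(N). An attacker who can trigger the quadratic path controls N.

```python
# Hedged sketch: quadratic vs. linear input lookup. Outputs are modeled
# as (key, value) pairs; both functions return the values for `needles`.
def find_quadratic(outputs, needles):
    # one full list scan per lookup: N lookups * O(N) scan = O(N^2)
    return [next((v for k, v in outputs if k == n), None) for n in needles]

def find_linear(outputs, needles):
    index = dict(outputs)              # single O(N) pass builds the index
    return [index.get(n) for n in needles]  # O(1) per lookup afterwards
```

Both return identical results; only the scaling differs, which is exactly the property an attacker crafting a worst-case block would exploit.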

Quote
Oppositely, it is the current implementation of block verification that imposes an ordering requirement; my algorithm doesn't even enforce parent txns to show up earlier in the block. It has nothing to do with the order of transactions being dependent (your term) or not: once a transaction fails because its input(s) are not present in the UTXO, it'll be re-scheduled for a retry as long as at least one more confirmed transaction is added to the set. I don't even look at the txn's index in the block, though it may be necessary because of consensus; if so, I can keep track of the index, waiting for a confirmed txn with lower index to retry.
You've missed my point. Yes, consensus requires that transactions in blocks be in topological order, but that doesn't mean you need to process them that way. The fact that your suggestion won't actually validate a transaction until it has validated its parents is unnecessary and inefficient, as is its retry-and-fail approach, instead of picking work up again only when it can actually make progress. Consider what your algorithm would do when every transaction depends on the one before it: it would process them all sequentially, with no parallelism at all. Meanwhile, in that same case Bitcoin does the majority of the processing entirely in parallel. One needs to have collected the outputs of the prior transactions to spend them later, but one need not have validated any of the prior transactions.
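The two-phase idea above can be sketched as follows. This is a hedged toy model, not Bitcoin Core's implementation: a cheap sequential pass collects every transaction's outputs into a temporary UTXO view, and only then are the expensive per-input script checks run, all in parallel, since each check needs the previously collected output but not its parent's validation result. The `check_script` placeholder and the dict-based transaction shape are assumptions for illustration.

```python
# Hedged sketch: phase 1 (sequential, cheap) collects outputs so later
# transactions can find their inputs; phase 2 (parallel, expensive) runs
# the script checks, which even a fully dependent chain parallelizes.
from concurrent.futures import ThreadPoolExecutor

def check_script(tx, inp, prev_out):
    # placeholder for the expensive signature/script verification
    return prev_out.get("value", 0) >= 0

def validate_block(txs, utxo):
    view = dict(utxo)                  # temporary view of the UTXO set
    jobs = []
    for tx in txs:                     # phase 1: sequential, cheap
        for inp in tx["inputs"]:
            if inp not in view:
                return False           # missing input: invalid block
            jobs.append((tx, inp, view[inp]))
            del view[inp]              # each output is spendable once
        for i, out in enumerate(tx["outputs"]):
            view[(tx["txid"], i)] = out
    with ThreadPoolExecutor() as pool: # phase 2: parallel script checks
        results = pool.map(lambda j: check_script(*j), jobs)
    return all(results)
```

Even when transaction N spends transaction N-1's output, every `check_script` job in phase 2 is independent, so the dominant cost runs fully in parallel.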