Board: Speculation
Re: Gold collapsing. Bitcoin UP.
by gmaxwell on 07/07/2015, 00:09:19 UTC
Quote
for each block in the blockchain, which will help answer Q1.  Does anyone know where I can get comprehensive data on the typical node's mempool size versus time to help answer Q2?
No idea; I'm not aware of anything that tracks that-- also, what does "typical" mean? Do you mean stock, unmodified Bitcoin Core?
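If someone wants to start collecting that data themselves, polling a node is trivial -- a minimal sketch, assuming bitcoin-cli is on the PATH and the node is recent enough to have the getmempoolinfo RPC:

Code:
# Hypothetical logger: polls a local node's mempool size over time and
# writes CSV lines (timestamp, tx count, serialized bytes).
# Assumes bitcoin-cli is on the PATH and the node has the getmempoolinfo RPC.
import json
import subprocess
import time

def poll_mempool(interval=60):
    while True:
        raw = subprocess.check_output(["bitcoin-cli", "getmempoolinfo"])
        info = json.loads(raw.decode())
        print("%d,%d,%d" % (int(time.time()), info["size"], info["bytes"]))
        time.sleep(interval)

if __name__ == "__main__":
    poll_mempool()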

I expect a correlation between empty blocks and mempool size-- though not for the reason you were expecting here: CreateNewBlock takes a long time, easily as much as 100ms, as it sorts the mempool multiple times-- and no one has bothered optimizing this at all because the standard mining software will mine empty blocks while it waits for the new transaction list. So work generated in the first hundred milliseconds or so after a new block will usually be empty. (Of course, miners stay on the initial work they got for much longer than 100ms.)
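To make the timing concrete, this is roughly the pattern the mining software follows -- a toy simulation, not actual pool code; the names and timings are made up:

Code:
# Toy simulation of the behaviour described above: hashers get an empty
# template immediately, and the full template (which takes ~100 ms to
# assemble from the mempool) is swapped in once it is ready.
# Names and timings are illustrative, not taken from real mining code.
import threading
import time

current_template = {"transactions": []}   # empty template, available instantly
template_lock = threading.Lock()

def create_new_block():
    time.sleep(0.1)                        # stand-in for the ~100 ms mempool sort
    return {"transactions": ["tx1", "tx2", "tx3"]}

def refresh_template():
    full = create_new_block()
    with template_lock:
        current_template.update(full)

def on_new_best_block():
    # Start rebuilding the template in the background; hand out the
    # empty one in the meantime.
    threading.Thread(target=refresh_template).start()

on_new_best_block()
time.sleep(0.02)
with template_lock:
    print(len(current_template["transactions"]), "txs in template after 20 ms")   # 0
time.sleep(0.2)
with template_lock:
    print(len(current_template["transactions"]), "txs in template after 220 ms")  # 3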

This is, however, unrelated to SPV mining-- in that case everything is still verified. As many people have pointed out (even in this thread), the interesting thing here isn't the empty blocks, it's the mining on an invalid chain.

And before someone runs off with an argument that this aspect of the behavior instead defines some kind of upper limit-- optimizing the mempool behavior would be trivial if anyone cared to; presumably people will care to when the fees they lose are non-negligible.  Beyond eliminating the inefficient copying and such, the simple expedient of running a two-stage pool, where block creation is done against a smaller pool that contains only enough transactions for 2 blocks (and which is refilled from a bigger one), would eliminate virtually all the cost. Likewise, as I pointed out up-thread, increasing your minfee can make your mempool as small as you like (the data I captured before was at a time when nodes with a default fee policy had 2.5 MB mempools).
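Roughly the shape of the two-stage idea -- a toy sketch, not Bitcoin Core code; the structure and constants are just illustrative:

Code:
# Hypothetical two-stage mempool sketch (not Bitcoin Core code): block
# templates are built from a small, fee-sorted "front" pool holding roughly
# two blocks' worth of transactions; the big pool is only touched when the
# front pool is refilled, off the block-template critical path.
MAX_BLOCK_BYTES = 1000000
FRONT_POOL_BYTES = 2 * MAX_BLOCK_BYTES

class TwoStagePool:
    def __init__(self):
        self.big_pool = []     # (feerate, size, tx) tuples; arbitrary size
        self.front_pool = []   # ~2 blocks' worth, kept sorted by feerate

    def add_tx(self, feerate, size, tx):
        self.big_pool.append((feerate, size, tx))

    def refill_front(self):
        # Done in the background, not while a block template is being built.
        self.big_pool.sort(key=lambda t: t[0], reverse=True)  # best feerate first
        taken, front = 0, []
        while self.big_pool and taken < FRONT_POOL_BYTES:
            feerate, size, tx = self.big_pool.pop(0)
            front.append((feerate, size, tx))
            taken += size
        self.front_pool = front

    def create_block_template(self):
        # Only the small front pool is scanned here, so the cost no longer
        # grows with the size of the full mempool.
        block, used, remaining = [], 0, []
        for feerate, size, tx in self.front_pool:
            if used + size <= MAX_BLOCK_BYTES:
                block.append(tx)
                used += size
            else:
                remaining.append((feerate, size, tx))
        self.front_pool = remaining
        return block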

Quote
First, nice try pretending the UTXO set is not potentially a memory problem. We've had long debates about this in this thread, so you are just being contrary.
Uh. I don't care what the consensus of the "Gold collapsing" thread is, the UTXO set is not stored in memory. It's stored on disk, in the .bitcoin/chainstate directory.  (And as you may note, a full node at initial startup uses much less memory than the current size of the UTXO set.) Certainly the UTXO size is a major concern for the viability of the system, since it sets a lower bound on the resource requirements (amount of online storage) for a full node... but it is not held in memory and has no risk of running hosts out of RAM as you claim.
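Anyone can check this for themselves -- a quick sketch that just totals the on-disk size of the chainstate database, assuming the default datadir at ~/.bitcoin:

Code:
# Totals the on-disk size of the UTXO database (the LevelDB files under
# .bitcoin/chainstate).  Assumes the default datadir location.
import os

chainstate = os.path.expanduser("~/.bitcoin/chainstate")
total = 0
for root, _, files in os.walk(chainstate):
    for name in files:
        total += os.path.getsize(os.path.join(root, name))
print("chainstate on disk: %.1f MB" % (total / 1e6))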

Quote
Second, my reference to Peter's argument above said nothing about the mempool; I was talking about block verification times. You're obfuscating again.
In your message to me you argued that f2pool was SPV mining because "the" mempool was big. I retorted that their mempool has nothing to do with it, and that, besides, they can make their mempool as small as they want. You argued that the mempools were the same; I pointed out that they were not. You responded claiming my responses were inconsistent with the points about verification delay; I then responded that no-- those comments were about verification delay, not the mempool. The two are unrelated.  You seem to have taken as axiomatic that mempool == verification delay, a position which is technically unjustified but supports your preordained conclusions; then you claim I'm being inconsistent when I simply point out that these things are very different and not generally related.

Quote
Third, unlike SPV mining of 0-tx blocks like now, didn't mean they would do the same without a limit. Perhaps they would pare down block sizes to an efficient level if other larger miners were allowed to clear out the unconfirmed TX set.
I think your phone made your response too short here; I'm not sure where you're going with that.

When you're back on a real computer, I'd also like to hear your response to my thought that it is "Super weird that you're arguing that the Bitcoin network is overloaded at its average level of space usage in blocks, while you're calling your system "underutilized" when you're using a similar proportion of your disk and enough of your RAM to push you deeply into swap."

Quote
Just from knowing a little about database tuning and RAM vs. disk-backed memory, I have always wondered if people have made projections about performance of the validation process under different scenarios and whether they can/will become problematic.  One thing I've always wondered is whether it would be possible to structure transactions such that they would load the validation process too heavily on cue, particularly if it is a common case to push more and more data out of the dbcache.  Any thoughts on this that can be quickly conveyed?
Most of the thought has just been of the form "the UTXO set size needs to be kept down", with an emphasis on the minimum resources to run a full node over the long term.  The database itself has n log n behavior, though if the working set is too large the performance falls off-- and the fall-off is only enormous for non-SSD drives.  Maybe the working set size is owed more attention, but my thinking there is that user tolerance for resource consumption kicks in long before that's a serious issue.
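As a crude model of that fall-off (the latency numbers are illustrative assumptions, not measurements): the expected cost per UTXO lookup is roughly hit_rate * cache_latency + miss_rate * disk_latency, and the disk term is what separates SSDs from spinning disks.

Code:
# Crude model of per-lookup cost once the UTXO working set outgrows the
# dbcache.  The latencies are rough, illustrative assumptions.
CACHE_NS = 200           # in-memory cache hit
SSD_NS = 100000          # ~0.1 ms random read
HDD_NS = 10000000        # ~10 ms seek

def expected_lookup_ns(hit_rate, disk_ns):
    return hit_rate * CACHE_NS + (1.0 - hit_rate) * disk_ns

for hit_rate in (0.99, 0.9, 0.5):
    print("hit rate %.2f: SSD %8.0f ns, HDD %10.0f ns"
          % (hit_rate,
             expected_lookup_ns(hit_rate, SSD_NS),
             expected_lookup_ns(hit_rate, HDD_NS)))

Even a modest miss rate stays tolerable on an SSD while the seek-bound case blows up, which is the fall-off I mean.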

When you talk about "would it be possible", do you mean an attack?  It's possible to construct a contrived block today that takes many minutes to verify, even within the 1MB limit; though a miner that did that would mostly be hurting themselves, unless they had some arrangement with most of the hashpower to accept their block.
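For a sense of why that's possible -- a rough back-of-the-envelope for the hashing work alone, assuming a single transaction near the block limit where each signature check re-serializes and hashes essentially the whole transaction; the sizes and throughput below are illustrative assumptions, and packing more signature checks per input pushes the cost higher still:

Code:
# Back-of-the-envelope for a contrived transaction near the block limit in
# which every input's signature check hashes (roughly) the whole transaction.
# Sizes and hash throughput are illustrative assumptions.
TX_BYTES = 999000        # one transaction close to the 1 MB limit
INPUT_SIZE = 180         # rough bytes per input (prevout + scriptSig)
HASH_MB_PER_SEC = 300    # rough single-threaded SHA256 throughput

n_inputs = TX_BYTES // INPUT_SIZE
bytes_hashed = n_inputs * TX_BYTES            # each check re-hashes ~the whole tx
seconds = bytes_hashed / (HASH_MB_PER_SEC * 1e6)
print("inputs: %d, data hashed: %.1f GB, ~%.0f s of hashing"
      % (n_inputs, bytes_hashed / 1e9, seconds))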