Posts by pmlyon (showing 20 of 72 results)

Post: Re: Chunked download & scalability (Development & Technical Discussion)
by pmlyon on 04/07/2014, 12:36:23 UTC
> > Yeah, it definitely doesn't take that long at today's block size; I'm just thinking ahead to when blocks are much larger. I'm trying to come up with a core design that can elegantly handle large blocks, if the network ever gets to the point where they're being used.
> Sorry if I wasn't clear—it shouldn't take much time regardless of the size, because most of the transactions in the block are already validated and waiting in the mempool. Before the software cached that validation it was taking a fair amount of time, but not anymore. Getting most of the validation off the latency-sensitive critical path is a big improvement.

Ahh, I see what you're saying. I'll definitely be able to take advantage of that as well, which will allow everything to run at the same speed as the core spent/unspent UTXO update, since all the work it queues up to verify will have already been done. That core piece I can run at over 10,000 tps (on an SSD), and all the pre-verification can be done using multiple, separate disks over which the block data is spanned. I'm trying to extract as much parallelism from the processing as I can, and allow for adding disks to increase your I/O speed, if that becomes an issue.
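
To sketch the kind of cache I'd use for that (hypothetical, simplified C#; not BitSharp's actual code): transactions whose scripts verify on mempool acceptance get remembered by hash, so connecting a block only runs scripts for cache misses.

Code:
using System.Collections.Concurrent;

// Hypothetical sketch: remember txes whose scripts already verified on
// mempool acceptance, so connecting a block skips re-verifying them and
// only the spent/unspent UTXO update stays on the critical path.
public class TxValidationCache
{
    private readonly ConcurrentDictionary<string, bool> validated =
        new ConcurrentDictionary<string, bool>();

    // Called when a loose transaction's scripts verify successfully.
    public void MarkValidated(string txHash)
    {
        validated[txHash] = true;
    }

    // Called per-tx during block connect: true only for cache misses.
    public bool NeedsScriptCheck(string txHash)
    {
        return !validated.ContainsKey(txHash);
    }
}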

Post: Re: Chunked download & scalability (Development & Technical Discussion)
by pmlyon on 04/07/2014, 02:43:19 UTC
> > What I'm aiming for with the streaming is to try and help reduce latency. So if it takes 10 seconds to download a block and 20 to validate, it should take you 20 seconds total to do both. If it takes you 30 seconds to download and 10 to validate, then it should take you 30 seconds to do both.
> Unless something weird has happened, it takes ~no time anymore to validate a new block at the tip of the network—it's almost all already validated when you receive it.

Yeah, it definitely doesn't take that long at today's block size; I'm just thinking ahead to when blocks are much larger. I'm trying to come up with a core design that can elegantly handle large blocks, if the network ever gets to the point where they're being used.

Post: Re: Chunked download & scalability (Development & Technical Discussion)
by pmlyon on 03/07/2014, 19:45:00 UTC
What I'm aiming for with the streaming is to try and help reduce latency. So if it takes 10 seconds to download a block and 20 to validate, it should take you 20 seconds total to do both. If it takes you 30 seconds to download and 10 to validate, then it should take you 30 seconds to do both.

Post: Re: Chunked download & scalability (Development & Technical Discussion)
by pmlyon on 03/07/2014, 18:46:10 UTC
> > Block messages for new blocks could consist of just the header, the coinbase txn, and an array of TxIds.
> P2Pool already does this for its own blocks. The problem more generally is that you don't know what other transactions the far end knows (at least not completely), and if you need an extra RTT to fetch the transactions, the end result is that the block takes longer to relay than just sending the data in the first place. So in terms of reducing block orphaning and such, it wouldn't be a win. (It works better in the p2pool case, since the nodes themselves are mining and can be sure to have pre-forwarded any transactions they'll mine.)

> Perhaps as a general optimization for nodes that don't care about getting the block as soon as possible it might be interesting, though it is possible to optimize latency and bandwidth at the same time through more complicated protocols (see link in gavin's post).

Is this the link you're referring to? https://en.bitcoin.it/wiki/User:Gmaxwell/block_network_coding

I confess that went over my head. :) In a scheme such as that, does it allow you to write out the block at all while it's coming down from the network, or do you need to finish downloading it completely before you can start writing it out? If you can download the block fast enough it doesn't really matter whether you can start feeding it through validation before it's finished, but I'm just wondering if that would still be possible with the network coding you described.
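
For reference, the idea quoted at the top of this post would amount to a message shaped roughly like this (hypothetical structure, not an actual protocol message):

Code:
// Hypothetical sketch of the quoted idea: ship only the header, the
// coinbase (which is never in anyone's mempool), and the txids. The
// receiver rebuilds the block body from its mempool and fetches any
// misses, paying the extra RTT noted above for each one.
public class ThinBlockMessage
{
    public byte[] Header;      // the 80-byte block header
    public byte[] CoinbaseTx;  // full serialized coinbase transaction
    public byte[][] TxIds;     // 32-byte txids of the remaining transactions
}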

Post: Re: Chunked download & scalability (Development & Technical Discussion)
by pmlyon on 03/07/2014, 17:30:28 UTC
Thanks, I had also wondered about doing something like that.

In the design that I have, it would look something like this, after your step 3:

1) Open a stream to validate the new block
2) Open a stream to write the block to disk
3) Open a stream to download the list of block tx hashes
3a) This could optionally use the idea I have above, except you would be proving that chunks of tx hashes make up a piece of the block as you grab them. Might be overkill, and you'd want to do more than 512 at a time since the tx hashes alone are a lot smaller.
4) Start reading from the tx hash stream and spin off requests for any missing txes, forwarding them to the block-writing stream in order as they come in.
5) As txes get streamed into the block, the block in turn gets streamed into the block validator.
6) The double-spend check is the only serial part of the validator as the block streams through it. Looking up previous tx outputs and validating scripts is done completely in parallel, and can be spanned across multiple disks & CPUs. If you've already validated the tx outside of the block, you can just skip it here.
7) Once everything has finished streaming through, and no errors have occurred, commit.

If you use 3a, you can stream into 4 before you've grabbed all the tx hashes. If you don't, you need to get them all and verify the merkle root first to make sure you have the right hash list.
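
To make that concrete, here's roughly the shape I have in mind (hypothetical types and delegates, not BitSharp's actual code; just showing where the serial and parallel parts of steps 4-7 sit):

Code:
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;

public class BlockPipeline
{
    // txHashes: step 3's stream. fetchTx: step 4, a mempool lookup or a
    // network request. writeToDisk: step 2's stream. The double-spend
    // check is the only serial stage; script checks fan out to workers.
    public void Process(IEnumerable<byte[]> txHashes,
                        Func<byte[], byte[]> fetchTx,
                        Action<byte[]> writeToDisk,
                        Action<byte[]> checkDoubleSpend,
                        Action<byte[]> verifyScripts)
    {
        var txQueue = new BlockingCollection<byte[]>(boundedCapacity: 1024);

        // Producer: resolve each hash to a full tx, in block order (step 4).
        var fetcher = Task.Run(() =>
        {
            foreach (var hash in txHashes)
                txQueue.Add(fetchTx(hash));
            txQueue.CompleteAdding();
        });

        // Consumer: stream txes into the block on disk and the validator
        // (steps 5 and 6) as they arrive.
        var scriptChecks = new List<Task>();
        foreach (var tx in txQueue.GetConsumingEnumerable())
        {
            writeToDisk(tx);
            checkDoubleSpend(tx);                        // serial part
            var captured = tx;
            scriptChecks.Add(Task.Run(() => verifyScripts(captured))); // parallel part
        }

        Task.WaitAll(scriptChecks.ToArray());
        fetcher.Wait();
        // Step 7: commit here only if nothing above threw.
    }
}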

Post (Topic OP): Reindex to revalidate? (Development & Technical Discussion)
by pmlyon on 03/07/2014, 16:22:38 UTC
Hi, I'd like to compare some numbers from bitcoind to the work that I'm doing. If I run with the -reindex option, will that revalidate all transactions as part of rebuilding the chainstate? I'd like to compare numbers when validating from genesis with the block data already on disk.

Post: Re: Chunked download & scalability (Development & Technical Discussion)
by pmlyon on 03/07/2014, 13:28:21 UTC
I should expand a bit on what I have in mind.

The idea is that I could send a request to a peer to download a block in a streaming fashion. Every 512 transactions they could include a proof that the previous 512 transactions do in fact make up a piece of the block that I've requested, via the merkle tree.
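
To make the proof concrete: since 512 is a power of two, each full chunk's tx hashes form a complete subtree of the merkle tree, so the sender only needs to include the sibling hashes from that subtree's root up to the root in the header. A rough sketch of the receiver's check (simplified C#; assumes full, aligned chunks and ignores Bitcoin's internal byte ordering and the odd-count duplication rule):

Code:
using System;
using System.Linq;
using System.Security.Cryptography;

public static class ChunkProof
{
    // Bitcoin's node combiner: double-SHA256 of the concatenated hashes.
    static byte[] Hash2(byte[] left, byte[] right)
    {
        using (var sha = SHA256.Create())
            return sha.ComputeHash(sha.ComputeHash(left.Concat(right).ToArray()));
    }

    // Root of the complete subtree built from one 512-tx chunk.
    static byte[] SubtreeRoot(byte[][] txHashes)
    {
        var level = txHashes;
        while (level.Length > 1)
        {
            var next = new byte[level.Length / 2][];
            for (int i = 0; i < next.Length; i++)
                next[i] = Hash2(level[2 * i], level[2 * i + 1]);
            level = next;
        }
        return level[0];
    }

    // Fold the chunk's subtree root with the siblings the sender provided;
    // the chunk is genuine only if we land on the header's merkle root.
    public static bool Verify(byte[][] chunkTxHashes, byte[][] siblings,
                              bool[] siblingIsLeft, byte[] merkleRoot)
    {
        var node = SubtreeRoot(chunkTxHashes);
        for (int i = 0; i < siblings.Length; i++)
            node = siblingIsLeft[i] ? Hash2(siblings[i], node)
                                    : Hash2(node, siblings[i]);
        return node.SequenceEqual(merkleRoot);
    }
}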

This allows me to then stream the block through validation, without having to wait for it to be fully downloaded first. As I commit each chunk of the block the validation can kick in and process that chunk.

If another peer asks me for a streaming copy of that same block, I can also start streaming the block out to them before I've finished receiving it myself.

On the receiving end, you wouldn't be doing any more work than you would normally. If a block is invalid, you could potentially find that out sooner than you could currently, before you've had to download the entire thing.

If you start sharing out the block before you've finished it, that would lead to more upstream bandwidth usage if the block does turn out to be invalid. I think mined headers are enough to mitigate that risk.

Post: Re: Chunked download & scalability (Development & Technical Discussion)
by pmlyon on 01/07/2014, 18:57:30 UTC
Today? Not at all. What I've been thinking about, though, is if we have something like 1GB blocks. A scheme like this could allow peers to start sharing the block out across the network without having to first wait to download the entire block. I'm trying to think of how we could reduce the latency involved with getting huge blocks out across the entire network.

Oops, only replied to half your comment. :) I wasn't thinking of trying to verify that later chunks of the block were valid before you have the earlier parts, just that each chunk did in fact make up a piece of the block and wasn't just random garbage. You'd still have to validate the block itself in order.

Post: Re: Chunked download & scalability (Development & Technical Discussion)
by pmlyon on 01/07/2014, 16:28:49 UTC
The headers actually take care of that, since they have to be mined either way. You'd have to have a ton of mining power and waste it all to carry out such an attack.

Post (Topic OP): Chunked download & scalability (Development & Technical Discussion)
by pmlyon on 01/07/2014, 14:22:25 UTC
Hi, I've recently been kicking around the idea of being able to download blocks in chunks after verifying the header and choosing the longest chain.

It seems to me that it should be possible to download the block in chunks of e.g. 512 transactions, and then provide a proof, via the merkle root, that each chunk is a valid piece of the block. This would allow blocks to be streamed out across the network without having to wait for the entire block to be uploaded and validated first.

If I'm correct that this could work, the merkle root collision (CVE-2012-2459) is a bit of a nuisance to deal with. Has there been any talk of addressing that bug when we eventually hard fork to allow for a larger block size? I think using a hash of 0 on the right side of the pairing would work, as opposed to duplicating the current hash to get a pair.
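
To make the two pairing rules concrete, here's a simplified sketch of both (illustrative C# only; it ignores Bitcoin's internal little-endian byte ordering):

Code:
using System;
using System.Linq;
using System.Security.Cryptography;

public static class Merkle
{
    static byte[] Hash2(byte[] left, byte[] right)
    {
        using (var sha = SHA256.Create())
            return sha.ComputeHash(sha.ComputeHash(left.Concat(right).ToArray()));
    }

    // zeroPad = false: today's rule, duplicate a lone hash to make a pair,
    // which is what allows the CVE-2012-2459 duplicate-tx ambiguity.
    // zeroPad = true: the variant suggested above, pairing a lone hash
    // with a constant zero hash on the right instead.
    public static byte[] Root(byte[][] txHashes, bool zeroPad)
    {
        var zero = new byte[32];
        var level = txHashes;
        while (level.Length > 1)
        {
            var next = new byte[(level.Length + 1) / 2][];
            for (int i = 0; i < next.Length; i++)
            {
                var left = level[2 * i];
                var right = 2 * i + 1 < level.Length
                    ? level[2 * i + 1]
                    : (zeroPad ? zero : left); // the only difference
                next[i] = Hash2(left, right);
            }
            level = next;
        }
        return level[0];
    }
}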

Post: Re: Cumulative difficulty shouldn't be used to choose the "main" blockchain (Development & Technical Discussion)
by pmlyon on 12/10/2013, 15:53:35 UTC
I think the problem is that you are comparing two different timestamps. The first blockchain wouldn't stop mining at the 00:40 mark, so if you continue it to the 01:20 mark it will win as it should.

Post: Re: ECDsa Verification Speed (Development & Technical Discussion)
by pmlyon on 28/09/2013, 21:51:40 UTC
> > Thanks everyone! What I have now is functional at least; it's good to know that the performance can be improved down the line.

> > P.S. Does anyone have much interest in a c# implementation? I started this mostly to educate myself, but I'm making more progress than I was expecting.

> Hey pmlyon, I am interested in a C# implementation of the ECDSA based message sign & verify in Bitcoin-QT. See this thread: https://bitcointalk.org/index.php?topic=297097.0. Have you had a go at implementing that in your project yet?


Hi, we haven't implemented that in our project yet, but I think this thread may help you: https://bitcointalk.org/index.php?topic=279752.0

Josh has a link there to a managed wrapper he wrote around the sipa secp256k1 verifier:
https://github.com/joshlang/Secp256k1.NET
https://github.com/sipa/secp256k1

I plan on using these when we get to that stage, but haven't looked at them yet.

Post: Re: C# Node (Development & Technical Discussion)
by pmlyon on 12/09/2013, 23:23:47 UTC
Hey, work is definitely still ongoing. There haven't been any source code updates for a few weeks... I've had two other developers join the project, which I'm super excited about! :)

We're all currently busy getting caught up with each other before making any more updates. Going forward we'll have much more up front design work and documentation; the current source code is a prototype.

I'm planning on posting an update to the site over the weekend. Thanks for checking in!

Paul

Post: Re: SCIP POW (Development & Technical Discussion)
by pmlyon on 29/08/2013, 15:55:24 UTC
...oops, good thing I apologized in advance. ;) Thanks!

Post: Re: SCIP POW (Development & Technical Discussion)
by pmlyon on 28/08/2013, 19:32:37 UTC
I apologize if this is obvious, but I didn't find it on a search. What does SCIP stand for?

Post: Re: How does the Blockchain store so many records without being too big? (Development & Technical Discussion)
by pmlyon on 28/08/2013, 14:48:12 UTC
At 0.1kB per transaction and 10 billion people, I get 1TB per day required for each person to be able to do one direct, p2p transaction on the blockchain per day. I'd love to see the core software being written today be able to handle that volume, for when the computers of tomorrow have the raw power to deal with it.
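
(Checking that arithmetic: 0.1 kB is 100 bytes per transaction, and 100 bytes × 10^10 transactions/day = 10^12 bytes, i.e. 1 TB of new block data per day.)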

Post: Re: .net GUI built in C# posted on reddit ? (Development & Technical Discussion)
by pmlyon on 13/08/2013, 13:29:22 UTC
I'm not sure which client you may have seen, but I'm currently (and with some recent help) working on a c# node. It's still only in the prototyping stage but you can see it here: https://github.com/pmlyon/BitSharp/wiki

Post: Re: Wiki clarification on headers payload (Development & Technical Discussion)
by pmlyon on 02/08/2013, 13:41:16 UTC
> > Sadly this is one of those cases where it can be easy to get it wrong. It does seem worth investigating whether there was any breakage or behavior change since the 0.3.x days. We have changed CBlock to CBlockHeader and it is entirely conceivable that one might have output the "number of transactions" variable, and another did not. Worth checking.

> This may be the case.

> Untested thesis, based on code read: "headers" message format changed when https://github.com/bitcoin/bitcoin/pull/2013 was merged in Nov 2012.

> You may have uncovered a protocol-related bitcoind bug.

> Update: Incorrect. I was misreading some code.

Thanks for checking into this! Was I correct to update the block headers table on the wiki to indicate that the value is a var_int that is always 0?
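
For anyone else parsing this, each entry in the "headers" message would then read like so (hypothetical C# sketch of the layout described above):

Code:
using System.IO;

public static class HeadersMessage
{
    // One "headers" entry: an 80-byte block header followed by a var_int
    // transaction count that, per the wiki fix above, is always 0 (and so
    // always fits in the single-byte var_int encoding read here).
    public static byte[] ReadEntry(BinaryReader reader, out byte txCount)
    {
        var header = reader.ReadBytes(80); // version, prev block, merkle root, time, bits, nonce
        txCount = reader.ReadByte();       // always 0x00
        return header;
    }
}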

Post: Re: Wiki clarification on headers payload (Development & Technical Discussion)
by pmlyon on 01/08/2013, 15:12:48 UTC
I've submitted the change to the wiki: https://en.bitcoin.it/wiki/Protocol_specification#Block_Headers

I'm not sure if there's any kind of review process on these edits, so hopefully that's ok. :)

Post: Re: Wiki clarification on headers payload (Development & Technical Discussion)
by pmlyon on 01/08/2013, 13:07:48 UTC
> > Thanks Jeff. Do you happen to have access to the wiki? I didn't see a place to submit corrections. I was hoping I'd be able to submit something on the discussion page, but there's no access to that either. I know it's a minor thing, but I noticed it and figured it'd be worth fixing up. :)

> You have to make a (0.01BTC) donation to the wiki to get write access.

> It is supposed to tell you when you create an account, but apparently, it doesn't work very well.

> I think the donation link gives a different address for each person.

That was almost certainly me not reading the create account page very well. ;) I've sent in my donation, my first bitcoin purchase. Thanks!