Board: Development & Technical Discussion
Re: Some 'technical commentary' about Core code esp. hardware utilisation
by Troll Buster on 06/07/2017, 23:35:17 UTC
Quote
And many people on the project quit because they didn't like working with you, what's your point?

Name one.

How about the entire XT team for starters:

Quote
https://bulldozer00.com/2016/01/19/ready-and-waiting/

Because of the steadfast stubbornness of the Bitcoin Core software development team to refuse to raise the protocol’s maximum block size in a timely fashion, three software forks are ready and waiting in the wings to save the day if the Bitcoin network starts to become overly unreliable due to continuous growth in usage.

So, why would the Bitcoin Core team drag its feet for 2+ years on solving the seemingly simple maximum block issue? Perhaps it’s because some key members of the development team, most notably Greg Maxwell, are paid by a commercial company whose mission is to profit from championing side-chain technology: Blockstream.com. Greg Maxwell is a Blockstream co-founder.


Quote
Says the few days old account...

Right, if the logic doesn't work, just fall back to using registration date and post counts to establish authority.

Like the guy above you who claimed to have "30 years experience" while demonstrating less knowledge about CPUs and compilers than a snot-nosed newbie drone programmer.

Quote
Reading failure on your part. The blocks are not in a database. Doing so would be very bad for performance. The chainstate is not meaningfully compressible beyond key sharing (and if it were, who would care, it's 2GBish).

At the time I didn't even know you guys were stupid enough not to compress the 150 GB of blocks, until someone reminded me in that thread. Seriously, what is the point of leaving blocks from 2009 uncompressed? SSDs are cheap these days, but not that cheap.

Quote
If you care about how much space the blocks are using, turn on pruning and you'll save 140GB.

So after all the talk about your l33t porn codec skills, your solution to save space is to just prune the blocks? LOL. You might as well say "Just run a thin wallet".

Why do you think compression experts around the world invented algorithms like LZ4? Why do you think it's part of ZFS? Because it is fast enough and it works; it is simple, proven tech that millions of low-power NAS boxes around the world have used for years.
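To be concrete, here is a rough sketch of what a filesystem-independent test looks like; it assumes the third-party lz4 Python package is installed and uses blocks/blk00000.dat as a placeholder path for one of bitcoind's block files, so treat it as an illustration rather than anything from Core:

Code:
# Rough sketch: compress one raw block file with the LZ4 frame format.
# Assumes the third-party "lz4" package (pip install lz4); the path below
# is a placeholder for wherever bitcoind keeps its blk*.dat files.
import lz4.frame

BLOCK_FILE = "blocks/blk00000.dat"  # hypothetical path

with open(BLOCK_FILE, "rb") as f:
    raw = f.read()

compressed = lz4.frame.compress(raw)

print(f"original:  {len(raw):,} bytes")
print(f"lz4 frame: {len(compressed):,} bytes")
print(f"ratio:     {len(compressed) / len(raw):.2%}")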

Quote
There is a PR for that, it was something like a 5% performance difference for initial sync at the time; it would be somewhat more now due to other optimizations. It's used in the fibre codebase without autodetection. Please feel free to finish up the autodetection for it.

I would have made patches a long time ago if the whole project weren't already rotten to the core.




I see you just added this part:

Quote
LZ4 is a really inefficient way to compress blocks -- it mostly just exploits repeated pubkeys from address reuse :( The compact serialization we have is better (28% reduction), but it's not clear if it's worth the slowdown, especially since you can just prune and save a lot more.
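To be fair, the quoted point about where the gains come from is easy to check for yourself. A toy, standard-library-only illustration (the 33-byte "pubkey" and the repeat count are made up, and random bytes stand in for hashes and signatures):

Code:
# Toy illustration: general-purpose codecs gain mostly from repetition
# (e.g. a pubkey reused across many outputs), not from the random-looking
# hashes and signatures. Standard library only; the sizes are made up.
import os
import zlib

reused_pubkey = os.urandom(33)          # stand-in for one reused compressed pubkey
unique_data = os.urandom(33 * 10_000)   # stand-in for hashes/signatures
reused_data = reused_pubkey * 10_000    # the same pubkey repeated 10,000 times

for label, blob in (("unique (random)", unique_data), ("reused pubkey", reused_data)):
    out = zlib.compress(blob, 6)
    print(f"{label:>16}: {len(blob):,} -> {len(out):,} bytes ({len(out) / len(blob):.1%})")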

Here, there are over 100 compression algorithms, all invented and benchmarked for you.
You'll easily find one with a size/speed/memory profile that just happens to work great on Bitcoin block files and beats LZ4.

Just pick ONE.

Quote
http://mattmahoney.net/dc/text.html
Large Text Compression Benchmark

Program           Options                       enwik8    
-------           -------                     ----------  
cmix v13                                      15,323,969  
durilca'kingsize  -m13000 -o40 -t2            16,209,167  
paq8pxd_v18       -s15                        16,345,626  
paq8hp12any       -8                          16,230,028  
drt|emma 0.1.22                               16,679,420  
zpaq 6.42         -m s10.0.5fmax6             17,855,729  
drt|lpaq9m        9                           17,964,751  
mcm 0.83          -x11                        18,233,295  
nanozip 0.09a     -cc -m32g -p1 -t1 -nm       18,594,163  
cmv 00.01.01      -m2,3,0x03ed7dfb            18,122,372
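And if anyone wants numbers instead of hand-waving, here is a rough sketch of how to measure the size/speed trade-off on a block file with nothing but the Python standard library; the path is a placeholder, and LZ4 plus the heavyweight entries above would need their own bindings:

Code:
# Rough sketch: size/speed comparison of a few stdlib codecs on one raw
# block file. The path is a placeholder; a real test would cover every
# blk*.dat file and the codecs from the benchmark above, not just these.
import bz2
import lzma
import time
import zlib

BLOCK_FILE = "blocks/blk00000.dat"  # hypothetical path

with open(BLOCK_FILE, "rb") as f:
    raw = f.read()

codecs = {
    "zlib -6": lambda data: zlib.compress(data, 6),
    "bzip2 -9": lambda data: bz2.compress(data, 9),
    "xz -6": lambda data: lzma.compress(data, preset=6),
}

for name, compress in codecs.items():
    start = time.perf_counter()
    out = compress(raw)
    elapsed = time.perf_counter() - start
    print(f"{name:>8}: {len(out):,} bytes "
          f"({len(out) / len(raw):.1%} of original) in {elapsed:.2f} s")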