If there were a "verify it" step, it would take as long as the current initial download, in which the indexing, not the data download, is the bottleneck.
[...]
The speed of the initial download is not a reflection of the protocol's bulk data transfer rate. The gating factor is the indexing performed while the data downloads.
Sorry, but these users' disk and CPU were not at 100%. For many users, the bottleneck is clearly not the database or the indexing.
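Anyone can check this on their own node. A minimal sketch, assuming the stock sysstat and procps tools are installed, is to watch utilization while the initial download runs:

# disk saturation: watch the %util column per device
iostat -x 5

# CPU: watch the bitcoin process's %CPU
top

If neither disk nor CPU is pegged at 100%, the node is waiting on something else, such as network round-trips.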
The data is mostly hashes, keys, and signatures, which are uncompressible.
bzip2 compresses it at roughly 1.5:1, a savings of about 33% that shaves some 35 MB off a ~100 MB download:
[jgarzik@bd data]$ tar cvf /tmp/1.tar blk0001.dat
blk0001.dat
[jgarzik@bd data]$ tar cvf /tmp/2.tar blk*.dat
blk0001.dat
blkindex.dat
[jgarzik@bd data]$ bzip2 -9v /tmp/[12].tar
/tmp/1.tar: 1.523:1, 5.253 bits/byte, 34.34% saved, 55439360 in, 36402074 out.
/tmp/2.tar: 1.512:1, 5.291 bits/byte, 33.86% saved, 103690240 in, 68577642 out.
I wouldn't call 33% "uncompressible".
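The "bits/byte" column tells the same story: 5.25 bits of actual information per 8-bit byte means roughly a third of the file is redundancy a compressor can squeeze out (5.253/8 ≈ 0.66). For comparison, here is a quick sketch with other stock compressors on the same file (gzip and xz are my picks, not part of the test above):

# faster but weaker than bzip2
gzip -9 -c blk0001.dat | wc -c

# slower, usually tighter than bzip2
xz -9 -c blk0001.dat | wc -c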