I don't think blocks are easily compressed. Much of the data is effectively random, since hashes are indistinguishable from random bytes, and because every byte must be preserved so the block can be verified, the compression has to be lossless. With so little pattern in the data, it's hard to get a good lossless compression ratio on a block.
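As a rough illustration of the "random data doesn't compress" point (this is just my own sketch, not anything from a protocol): feeding hash-like random bytes into a general-purpose compressor such as zlib gains essentially nothing.

```python
import os
import zlib

# Simulate 1,000 transaction hashes (32 random bytes each), standing in for
# the hash-heavy parts of a block.
hashes = os.urandom(32 * 1000)

compressed = zlib.compress(hashes, level=9)

# Typically prints ~32000 vs. ~32000 or slightly more: no real savings,
# because there is no pattern for the compressor to exploit.
print(len(hashes), len(compressed))
```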
This would be true if the receiver (another node or miner) didn't already know a lot about the contents of the block. However, since most nodes have fairly similar mempools, and since blocks are built from transactions held in a miner's mempool, this redundancy can be exploited to achieve compression. Examples of this type of compression include the Relay Network, the IBLT proposal, and Danny's Merkle Tree idea (which I like, BTW).
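To make the general idea concrete, here is a minimal sketch (my own, not the actual Relay Network, IBLT, or Danny's scheme; the function names are made up): the sender ships only the transaction IDs, and the receiver reconstructs the block from its own mempool, requesting full transactions only for the few IDs it has never seen.

```python
from typing import Callable, Dict, List

def encode_block(block_txids: List[str]) -> List[str]:
    """Sender side: transmit only the txids instead of full transactions."""
    return block_txids

def decode_block(txids: List[str],
                 mempool: Dict[str, bytes],
                 fetch_missing: Callable[[str], bytes]) -> List[bytes]:
    """Receiver side: resolve each txid from the local mempool, falling back
    to fetching the full transaction only when it isn't already known."""
    txs = []
    for txid in txids:
        if txid in mempool:
            txs.append(mempool[txid])        # already known: no extra bytes on the wire
        else:
            txs.append(fetch_missing(txid))  # rare case: request the full transaction
    return txs
```

The bandwidth cost then scales with the handful of transactions the receiver is missing rather than with the full block size, which is where the real compression comes from.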