However, the Corallo Relay Network does support a sort of compression. Rather than transmitting all the transactions in a solved block, since most of the other miners already know about them, it just transmits indices that refer to each transaction (sort of like a map of how the TXs fit into the block).
I think a more appropriate term for this would be encoding - using codes to represent larger blocks of information that are already known by all parties. This is usually far more effective than blind data compression.
If miners could communicate 100MB blocks with 250KB of encoded data, that is what would allow Bitcoin to scale, don't you think?
My guess is they wouldn't even need to send the full 32-byte TX IDs; a 4-byte hash of each TX ID should be enough for other miners to identify the TXs included in new blocks.
Using this method a miner could communicate a block with 250,000 TXs!!! (>400 TPS) with only 1MB of data.
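A quick back-of-envelope check of those numbers. The `short_id` truncation scheme here is hypothetical; a real protocol would need a fallback for short-ID collisions:

```python
# Sanity-check: 4-byte short IDs (truncated hash of the txid,
# hypothetical scheme) for a 250,000-transaction block.
import hashlib

def short_id(txid_hex, n=4):
    # Truncate a hash of the txid to n bytes.
    return hashlib.sha256(bytes.fromhex(txid_hex)).digest()[:n]

txs = 250_000
bytes_needed = txs * 4   # 1,000,000 bytes, i.e. ~1MB for the whole block
tps = txs // 600         # per 10-minute block interval: 416 TPS
print(bytes_needed, tps)
```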

What a novel idea, maybe you should apply to become a Core developer?
Here:
The relay network includes an optimized transmission protocol which enables sending the "entire" block typically in just a small number of bytes (much smaller than the summaries you suggest, which would still leave the participants needing to send the block).
E.g. block 000ce90846 was 999950 bytes and the relay network protocol sent it using at most 4906 bytes.
http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-August/009942.html

Right, this method is already being used by most miners, but it isn't the standard way to propagate blocks.
Should it be?
Is this the key to scaling Bitcoin?
I think yes on both counts.
Right now this method is netting miners a >200X coding gain!
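That figure follows from the example block quoted above (999950 bytes, sent using at most 4906 bytes):

```python
# Coding gain implied by the relay-network example quoted above.
block_bytes = 999_950   # size of block 000ce90846
relay_bytes = 4_906     # upper bound on bytes sent over the relay protocol
gain = block_bytes / relay_bytes
print(round(gain))      # ~204, i.e. >200X
```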
Corrected.
This doesn't solve any of the centralization or attack-vector concerns, btw.
Why?
Bigger blocks will no longer mean centralization if a 1GB block can be sent using 4MB!

Did you forget about the nodes? 1GB blocks?

Seriously, this stuff is just beyond your understanding, just stop it.

Nodes can do the same damn thing...
1GB is pushing the limits, I agree,
but it's totally reasonable to expect a typical home computer to be able to stream 100MB of TXs within 10 mins.
At 1MB/s you can stream 600MB of data every 10 mins.
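The arithmetic behind that claim, assuming a sustained 1MB/s connection over one block interval:

```python
# Bandwidth available per 10-minute block interval at 1MB/s.
rate_mb_per_s = 1
interval_s = 10 * 60
capacity_mb = rate_mb_per_s * interval_s
print(capacity_mb)  # 600MB, comfortably above a 100MB block
```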
Miners/full nodes would simply be expected to be able to keep up with the TPS happening on the network.
And another optimization would be for miners not to include TXs that are unlikely to have fully propagated through the network yet.
wtf is wrong with that?