Is it really necessary? How much is he gonna save, especially since he still rounds to bytes and doesn't use compression?
If we use compression, I recommend it be optional for a client.
This scheme was used by an experimental compiler for the Oberon OS in the '90s to encode constants in abstract-syntax-tree-based object files called slim binaries. They used other techniques as well to keep object file sizes down, and nearly all of what they did isn't relevant to what we're doing, but on average they saw a dramatic decrease in the size of their object files compared to other schemes.
This is from the article "Slim Binaries" by Franz and Kistler, Communications of the ACM, vol. 40, no. 12, Dec. 1997.

Granted, most of those savings were due to an adaptive LZW-style encoding of the elements of an AST, so we won't see savings anywhere near that. I recommend the variable-length encoding of integer constants not for any compression it may provide; that's just an incidental bonus we get for free. I recommend the encoding for the following reasons:
1. For portability of representation of constants between architectures.
2. Low working memory requirements.
3. Fast to encode and decode.
4. Simple to implement.
5. The code for encoding and decoding integers is very small -- only a handful of bytes (see the sketch after this list).
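To give a sense of scale, here is a minimal sketch in C of the kind of 7-bits-per-byte, continuation-bit encoding under discussion. The function names are mine, and I'm not claiming this matches Franz's exact byte layout; it's the common base-128 variant of the idea.

    #include <stdint.h>
    #include <stddef.h>

    /* Encode v seven bits at a time, least-significant group first.
       The high bit of each byte means "more bytes follow".
       Returns the number of bytes written (at most 5 for 32 bits). */
    static size_t put_uint(uint8_t *out, uint32_t v)
    {
        size_t n = 0;
        while (v >= 0x80) {
            out[n++] = (uint8_t)(v | 0x80);
            v >>= 7;
        }
        out[n++] = (uint8_t)v;
        return n;
    }

    /* Decode; returns the number of bytes consumed. */
    static size_t get_uint(const uint8_t *in, uint32_t *v)
    {
        size_t n = 0;
        int shift = 0;
        uint8_t b;
        *v = 0;
        do {
            b = in[n++];
            *v |= (uint32_t)(b & 0x7F) << shift;
            shift += 7;
        } while (b & 0x80);
        return n;
    }

A small constant costs one byte, a full 32-bit value costs five, the working state is a couple of locals, and the bytes mean the same thing on every architecture -- which is points 1 through 5 in a nutshell.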
And messing with bits is even worse than dealing with endianness.
The scheme is actually pretty simple. The example code Franz gives for encoding integers doesn't do any explicit bit-twiddling: the Oberon family of languages deliberately leaves such operations out in order to guarantee portability. (It's a foreign way of thinking for C-accustomed programmers.)
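To make that concrete, the same encoder can be written the way the Oberon code reads, with MOD and DIV standing in for masks and shifts. Again, this is just a sketch in C, not Franz's code:

    /* Same format as above, but with no bit operators: v % 128
       extracts the low seven bits and v / 128 discards them, and
       adding 128 sets the continuation flag. */
    static size_t put_uint_arith(uint8_t *out, uint32_t v)
    {
        size_t n = 0;
        while (v >= 128) {
            out[n++] = (uint8_t)(v % 128 + 128);
            v /= 128;
        }
        out[n++] = (uint8_t)v;
        return n;
    }

    /* Decode by accumulating each 7-bit group times its weight. */
    static size_t get_uint_arith(const uint8_t *in, uint32_t *v)
    {
        size_t n = 0;
        uint32_t weight = 1;
        uint8_t b;
        *v = 0;
        do {
            b = in[n++];
            *v += (uint32_t)(b % 128) * weight;
            weight *= 128;
        } while (b >= 128);
        return n;
    }

Because the format is defined byte by byte, endianness never enters into it, and any decent compiler turns the divisions by 128 into shifts anyway.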