Basically, we should try a different approach. I am starting with the idea of converting the private keys to binary, as per the author's comment. It is easy to see that the keys all start with 101, 100, 110, or 111, and that the substrings inside are runs like 00, 11, 000, 111, 0000, 11111, ... We can reduce the search space quite a bit here, and we can use Markov chains to generate the high-probability strings first. Generating candidate keys and feeding them to keyhunt should speed things up a lot.
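For illustration only, here is a minimal sketch of what that could look like: train a first-order Markov chain on the bit strings of previously solved keys, then sample 70-bit candidates from it. The KNOWN_KEYS values below are placeholders rather than the real solved puzzle keys, and the candidates are simply written to a text file; how (or whether) that file can be fed to keyhunt depends on the mode it is run in, so that step is left out.

```python
# A sketch only: first-order Markov chain over the bits of already-solved keys.
import random
from collections import defaultdict

KNOWN_KEYS = [            # placeholders, NOT the real solved puzzle keys
    0x34A1B84B6431A6C4E1,
    0x2BD93EF4584676B8C0,
]
WIDTH = 70                # the puzzle keys discussed here are 70 bits long

def bits(k):
    return format(k, f"0{WIDTH}b")

def train(keys):
    """Count bit-to-bit transitions, i.e. P(next bit | current bit)."""
    counts = defaultdict(lambda: [1, 1])       # Laplace smoothing
    for k in keys:
        b = bits(k)
        for cur, nxt in zip(b, b[1:]):
            counts[cur][int(nxt)] += 1
    return {c: [n / sum(v) for n in v] for c, v in counts.items()}

def sample_key(model):
    """Sample one candidate; the leading bit is fixed to 1, matching the key range."""
    out = ["1"]
    for _ in range(WIDTH - 1):
        p_one = model.get(out[-1], [0.5, 0.5])[1]
        out.append("1" if random.random() < p_one else "0")
    return int("".join(out), 2)

if __name__ == "__main__":
    model = train(KNOWN_KEYS)
    with open("candidates.txt", "w") as f:
        for _ in range(1000):
            f.write(f"{sample_key(model):018x}\n")   # hex, one key per line
```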
All numbers already are binary; there is nothing to convert. Every bit at every position has a 50% chance of being 0 or 1, because every key has an equal chance of being the target: there are 2**70 keys of 70 bits each, running from 0...0 to 1...1, and across all of them exactly half of the bits are 0 and half are 1.
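That counting argument can be checked directly on a small width, where the whole key space fits in a loop; the width of 16 below is arbitrary, 70 is simply too large to enumerate.

```python
# Over all 2**n keys, every bit position is 1 in exactly half of them.
n = 16
ones_per_position = [0] * n
for key in range(2 ** n):
    for pos in range(n):
        ones_per_position[pos] += (key >> pos) & 1

half = 2 ** n // 2
assert all(ones == half for ones in ones_per_position)
print(f"each of the {n} bit positions is 1 in exactly {half} of {2 ** n} keys")
```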
There is no bias, except maybe to human eyes, and that has no impact anyway. Applying data compression to strings of bits doesn't make some keys or bit substrings more or less likely; that would mean the universe somehow gives priority to certain combinations instead of giving each of them an equal chance.
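Put in numbers: under a uniform draw from the 70-bit space, a heavily patterned key and a random-looking one get exactly the same probability. Both example keys below are arbitrary illustrations.

```python
# Every exact key has probability 1 / 2**70, "compressible" looking or not.
from fractions import Fraction

KEYSPACE = 2 ** 70
patterned = int("10" * 35, 2)           # 1010...10, compresses extremely well
unpatterned = 0x2F3A91C47B05D6E8A1      # 70 bits, no obvious structure

for key in (patterned, unpatterned):
    print(f"{key:070b} -> probability {Fraction(1, KEYSPACE)}")
```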