Therefore, because hashes are uniformly distributed, it is less likely that we will find one prefix close to another.
Probabilistic software that uses a previously found prefix as a reference point, skipping millions of keys where a coincidence or collision is less likely, is not a waste of time as you suggest. It is simply another probabilistic option. In a case like this, where a sequential brute-force search is exponentially demanding, a probabilistic search is the better strategy.
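For illustration only, here is a minimal Python sketch of what such a probabilistic, skip-based scan might look like. The prefix, the key range, and the use of SHA-256 as a stand-in for the real address-hashing pipeline are all hypothetical placeholders, not anyone's actual setup:

```python
import hashlib
import secrets

# Hypothetical target: a 2-byte (4 hex digit) prefix, and a 66-bit key range.
TARGET_PREFIX = "abcd"
RANGE_START = 1 << 65
RANGE_END = (1 << 66) - 1

def hash_of_key(k):
    # Stand-in for the real hashing pipeline; SHA-256 used only for illustration.
    return hashlib.sha256(k.to_bytes(32, "big")).hexdigest()

def probabilistic_scan(samples):
    # Instead of walking keys sequentially, sample random keys spread
    # across the whole range and test each one independently.
    for _ in range(samples):
        k = RANGE_START + secrets.randbelow(RANGE_END - RANGE_START + 1)
        if hash_of_key(k).startswith(TARGET_PREFIX):
            return k
    return None

print(probabilistic_scan(1_000_000))
```

With a 2-byte prefix (probability 1/65536 per draw), a million random samples are expected to hit roughly 15 matches, but the actual count will fluctuate from run to run, which is exactly the point being argued.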
I'm not preaching anything that's not in a 9th grade high school math book where I live.
Did you read the definition of a uniform distribution? It's a random oracle where anything can happen (with an equal chance of course, which is all I've been bragging about for the last 10 posts here), not something that will ever follow any kind of evenly spaced value-to-slot allocation, no matter how you look at it, analyze it, or whatever else you want to do with it. The "margin of error" when trying to predict anything about it simply amounts to keeping track of skipped key ranges, which is where everyone who attempts to use this "theory" will end up once they discover they have found either too many or too few prefixes.
Just because the probabilities are all equal does not mean that you will end up with an equal count of each value over some microscopic sample (like any sample of 2**66 hashes out of comb(2**160, 2**66) combinations, or rather out of (2**160) ** (2**66) possible outcomes, since every hash has an equal probability of occurring at each draw).
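To put rough numbers on that, assuming each hash in the sample behaves like an independent uniform draw (a simple binomial model, which is an assumption here), the expected count of one fixed 2-byte prefix and its typical deviation look like this:

```python
from math import sqrt

n = 2**66          # sample size
p = 1 / 2**16      # probability of hitting one specific 2-byte prefix
mean = n * p                   # expected count: 2**50, about 1.13e15
sigma = sqrt(n * p * (1 - p))  # standard deviation: about 2**25, i.e. ~3.4e7
print(f"expected ~ {mean:.3e}, typical deviation ~ +/- {sigma:.3e}")
# The deviation is tiny relative to the mean (~3e-8 of it), yet huge in
# absolute terms: tens of millions of prefixes more or fewer than "expected".
```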
It is exactly the same principle as flipping a coin 100 times: getting an exact 50/50 split is unlikely, even though it is the single most likely outcome. Or isn't a coin flip a uniform 50/50 distribution? Why would RIPEMD-160 be any different? If you scan some range and only count how many times the first bit is 1, there's essentially zero chance of getting anywhere near the same count as hashes that start with a 0: the absolute difference will be a really big number, not close to zero, and there's something like 99.9% confidence that such a big difference occurs. Only as the number of flips (or hashes) goes to infinity does the ratio approach exactly 50% zeros (heads) and 50% ones (tails). Along the way, the absolute difference will almost certainly keep growing, sometimes swapping which side, the 0s or the 1s, is ahead. It's all a probability race built on equal chances of possible occurrences.
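A quick sanity check of the coin-flip claim (these are just the standard binomial figures, nothing specific to RIPEMD-160):

```python
from math import comb, sqrt

# Chance of an exact 50/50 split in 100 fair flips: the single most
# likely outcome, yet still unlikely (~8%).
p_exact = comb(100, 50) / 2**100
print(f"P(exactly 50 heads in 100 flips) = {p_exact:.4f}")   # ~0.0796

# The *ratio* of heads tends to 1/2, but the typical *absolute* gap
# |heads - tails| keeps growing on the order of sqrt(n).
for n in (100, 10_000, 1_000_000):
    print(n, "flips -> typical |heads - tails| on the order of", round(sqrt(n)))
```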
Thanks for the lesson that 2 bytes can hold 65536 distinct values.
Bitcoin block 756951 has the lowest SHA256 hash to date: 0000000000000000000000005d6f06154c8685146aa7bc3dc9843876c9cefd0f. That hash has 97 leading zero bits, so generating a hash this low has the same likelihood as flipping a coin 97 times in a row and having it land on tails every time.
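Anyone who wants to verify the comparison can count the leading zero bits of that hash directly:

```python
# Count the leading zero bits of the block hash quoted above.
h = "0000000000000000000000005d6f06154c8685146aa7bc3dc9843876c9cefd0f"
value = int(h, 16)
leading_zero_bits = 256 - value.bit_length()
print(leading_zero_bits)   # 97 -> probability 2**-97, same as 97 tails in a row
```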