Therefore, because hashes are uniformly distributed, it is less likely that we will find one prefix close to another.
Probabilistic software that uses a previous prefix as a reference point, skipping millions of keys where a coincidence or collision is less likely, is not a waste of time as you suggest; it is simply another probabilistic option. In a case like this, where sequential brute force is exponentially demanding, a probabilistic search is the better strategy.
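Something like the following rough sketch is what I mean. The 2-byte target prefix, the SHA-256 key-to-hash mapping, and the skip size are illustrative placeholders only, not the real setup:

```python
# Sketch of the skip-based probabilistic search described above.
# TARGET_PREFIX, the hash function, and SKIP_AFTER_HIT are assumptions
# chosen only to make the idea concrete.
import hashlib

TARGET_PREFIX = bytes.fromhex("ab12")   # hypothetical 2-byte prefix we are hunting
SKIP_AFTER_HIT = 1_000_000              # keys skipped once a prefix match is found

def h(key: int) -> bytes:
    """Stand-in for the real key -> hash mapping (assumed uniform)."""
    return hashlib.sha256(key.to_bytes(32, "big")).digest()

def skip_search(start: int, end: int) -> list[int]:
    """Scan [start, end); after every prefix hit, use that hit as a
    reference point and jump ahead instead of checking the neighbouring
    keys, on the assumption that another hit right next to it is less likely."""
    hits = []
    k = start
    while k < end:
        if h(k).startswith(TARGET_PREFIX):
            hits.append(k)
            k += SKIP_AFTER_HIT   # skip a block of keys past the last hit
        else:
            k += 1
    return hits

if __name__ == "__main__":
    print(skip_search(0, 2_000_000))
```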
I'm not preaching anything that isn't in a 9th-grade high-school math book where I live.
Did you read the definition of a uniform distribution? It's a random oracle where anything can happen (with an equal chance, of course, which is all I've been bragging about for the last 10 posts here), not something that will ever follow any kind of evenly spaced value-to-slot allocation, no matter how you look at it, analyze it, or whatever else you want to do with it. The "margin of error" when trying to predict anything about it simply amounts to keeping track of skipped key ranges, which is where everyone who attempts to use this "theory" will end up once they find that they have either too many or too few prefixes found.
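If you want to see what "anything can happen with equal chance" looks like in practice, here is a quick simulation sketch (the 2-byte prefix, sample size, and seed are arbitrary assumptions): the gaps between consecutive prefix hits under a uniform hash are memoryless, and the spread is as large as the average, so there is nothing evenly spaced to exploit.

```python
# Simulated gaps between consecutive 2-byte-prefix hits under a uniform hash.
# The gap distribution is geometric (memoryless): a hit just found tells you
# nothing about where the next one lands. Sample size and seed are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
p_hit = 1 / 65536                          # chance a uniform hash starts with a fixed 2-byte prefix
gaps = rng.geometric(p_hit, size=50_000)   # simulated distances between consecutive hits

print("mean gap :", gaps.mean())           # ~65536 keys, as expected
print("std dev  :", gaps.std())            # also ~65536: nowhere near even spacing
print("min / max:", gaps.min(), gaps.max())
```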
Thanks for the lesson that 2 bytes can hold 65536 distinct values.