This is the part most people intuitively ignore, and that's why they fail:
- Probability of finding "a": 1/16
- Probability of finding "ab": 1/256
- Probability of finding exactly two "ab" in 256 attempts: approximately 0.18
As you can see, if you find "ab" in an early shot, the probability of finding another "ab" in the next 256 attempts is very low. That would let you skip those subsequent attempts without losing significant precision: each attempt is still an independent event, but the chances of another "ab" being there are very low.
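To sanity-check those figures, here is a minimal sketch assuming the simplest model (256 independent attempts, each matching the two-hex-character prefix with probability 1/256; that framing is an assumption, not a description of any particular script):

```python
from math import comb

# Assumed model: n independent attempts, each matching the 2-hex-char
# prefix "ab" with probability 1/256 (an assumption, not anyone's script).
n, p = 256, 1 / 256

def pmf(k: int) -> float:
    """Probability of exactly k matches in n attempts (binomial)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

print(round(pmf(1), 3))  # ~0.369: exactly one "ab" in 256 attempts
print(round(pmf(2), 3))  # ~0.184: exactly two, the ~0.18 figure above
```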
Here's your fallacy (the one you keep repeating):
You didn't take into account that the probability of not finding "ab" at all in 256 attempts is about 37%.
You didn't take into account that the probability of finding "ab" more than once is not "very low" but sits around 26% (100% minus ~37% for zero minus ~37% for exactly one). You never added up the probabilities of it appearing 3 times, 4 times, 5 times, and so on (up to 256 times). You only base your claim on the fact that it's very unlikely to appear "a second time", but it can also appear more than 2 times, not necessarily just one more time.
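Under that same binomial assumption (256 independent attempts at 1/256 each), the full distribution behind these percentages is easy to check; this is a sketch of the arithmetic, not anyone's actual code:

```python
from math import comb

n, p = 256, 1 / 256  # assumed: independent attempts, 2-hex-char prefix

def pmf(k: int) -> float:
    """Probability of exactly k matches in n attempts (binomial)."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

p0, p1 = pmf(0), pmf(1)
print(f"P(0 matches)  ~ {p0:.3f}")           # ~0.367
print(f"P(1 match)    ~ {p1:.3f}")           # ~0.369
print(f"P(2+ matches) ~ {1 - p0 - p1:.3f}")  # ~0.264
print(f"P(3 matches)  ~ {pmf(3):.3f}")       # ~0.061
print(f"P(4 matches)  ~ {pmf(4):.3f}")       # ~0.015
```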
So while in principle you may be onto something (though nothing more than observing the normal behavior of a uniform random variable), your calculations are off by quite a bit, to the point that the probabilities of failure, once you go skipping over and over again, accumulate, and they accumulate in the fashion you like so much: compounded.
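To illustrate that compounding with purely hypothetical numbers (the per-skip miss probability q below is an assumption for the sake of argument, not a measured value), the chance of having skipped past the target at least once grows with every skip:

```python
# Hypothetical illustration of how per-skip miss probabilities compound.
# q is an assumed chance that a single skip jumps over the real target;
# it is not taken from any script or any measurement.
q = 0.01

for skips in (10, 100, 500, 1000):
    p_missed_at_least_once = 1 - (1 - q) ** skips
    print(f"{skips:4d} skips -> P(missed at least once) = {p_missed_at_least_once:.3f}")
```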
I already explained this, and the math backs me up; you can try to refute it, but anyone who reads it and knows the subject will understand what I mean. Neither you nor I can alter that. If the probability of finding 2 is low, what importance do the probabilities of finding 3 or more have, when they decrease exponentially? And I care even less about the probability of finding nothing, since I am omitting that unlikely space anyway.
Regarding how I correct the margin of error, I already explained it. In the very unlikely case (very, very, very unlikely) that it misses the target, my script simply recalculates the database, omitting a smaller percentage until it finds the target. Consequently, in the worst-case scenario, which is extreme (omitting 0%, with the target right next to another identical prefix), I will find the target 100% of the time. My script just prioritizes the most probable moves. So I don't understand this battle of egos.
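In rough terms, that fallback looks something like this (a simplified sketch only; the function names and the percentage steps are placeholders, not the actual script):

```python
def scan_with_skip(target: str, omit_percent: float):
    """Placeholder for the actual scan: returns the target's position, or
    None when the skipping passed over it. The real skipping logic is not
    shown here."""
    return None  # stub

def search_with_fallback(target: str, percentages=(90, 75, 50, 25, 0)):
    """Fallback described above: start with the most aggressive omission and,
    on a miss, recalculate with a smaller percentage. The final 0% pass omits
    nothing, so even in the extreme worst case the target is still found."""
    for pct in percentages:
        hit = scan_with_skip(target, omit_percent=pct)
        if hit is not None:
            return hit, pct
    return None  # only reachable with the stub above; a real 0% pass is exhaustive
```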
Every time you try to be right, you fail more.
I am only defending my idea: I don't know what Bibilgin does, and it could be something entirely different. Since I don't know how he does his calculations, I can't comment on whether it's right or wrong, but I give him the benefit of the doubt because he hasn't disclosed his method.