To verify: if we search for, say, a 3-character hex prefix, we can skip the rest of the 2-prefix space (256 keys) once we find an 'abc'. Clearly, the probability of finding another 'abc' within the next 256 keys is minimal. That's why my method searches the most probable zones first and then, if necessary, lowers the percentage threshold in the database to keep exploring without retracing steps, always focusing on the most probable place.
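A minimal sketch of that scan-and-skip idea, assuming blocks of 256 keys and one random 3-character prefix per candidate (the block size and the scan_block/random_key_prefix helpers are illustrative, not the actual search code). Under these assumptions, roughly 1 - (1 - 1/4096)^256, about 6%, of blocks should contain a hit:

import secrets

BLOCK_SIZE = 256          # assumed block size: the keyspace of one 2-char prefix
TARGET_PREFIX = "abc"     # the 3-char hex prefix being hunted

def random_key_prefix():
    # Stand-in for however candidate keys are generated in the real search.
    return ''.join(secrets.choice('0123456789abcdef') for _ in range(3))

def scan_block(block_size=BLOCK_SIZE):
    """Scan one block; stop (skip the remainder) on the first hit."""
    for i in range(block_size):
        if random_key_prefix() == TARGET_PREFIX:
            return i          # hit: the rest of this block is skipped
    return None               # no hit anywhere in this block

# Scan 100 blocks and count how often a hit let us skip ahead.
hits = [scan_block() for _ in range(100)]
found = [h for h in hits if h is not None]
print(f"Blocks with a hit: {len(found)} of 100")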
Is it so difficult for AI experts to do this?

import secrets

def generate_hex():
    # NOTE: the original generated only 2 characters, so "abc" (3 characters)
    # could never match; 3 characters is assumed to be the intent here.
    return ''.join(secrets.choice('0123456789abcdef') for _ in range(3))

def calculate_probabilities(attempts):
    count_ab = 0
    count_abc = 0
    for _ in range(attempts):
        hex_val = generate_hex()
        if "ab" in hex_val:
            count_ab += 1
        if "abc" in hex_val:
            count_abc += 1
    # Fraction of attempts whose string contains the prefix (per-attempt frequency).
    probability_ab = count_ab / attempts
    probability_abc = count_abc / attempts
    return probability_ab, probability_abc

attempts = 256
samples = 1000
sum_probability_ab = 0
sum_probability_abc = 0
for _ in range(samples):
    probability_ab, probability_abc = calculate_probabilities(attempts)
    sum_probability_ab += probability_ab
    sum_probability_abc += probability_abc

average_probability_ab = sum_probability_ab / samples
average_probability_abc = sum_probability_abc / samples
print(f"Average probability of finding 'ab' in {attempts} attempts (based on {samples} samples): {average_probability_ab:.6f}")
print(f"Average probability of finding 'abc' in {attempts} attempts (based on {samples} samples): {average_probability_abc:.6f}")
Instead of using "hypothetical" data, just use the real data gathered from 67. It is a ton of data, covering over 50% of the entire range. Pick a subset and run the numbers. That is the data I used to come up with the following (a sketch of the gap computation appears after the numbers):
First Run:
- Average difference: 282602011632656.06
- Smallest difference: 194903573833
- Largest difference: 1946984192923367
Second Run (Excluding Smallest and Largest Differences):
- Average difference: 281241799946404.22
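I don't have that dataset on hand, but the three statistics can be reproduced from any list of found key values; a sketch follows (the sample_keys values are illustrative placeholders, not the real 67 data):

def difference_stats(values):
    """Average, smallest, and largest gap between consecutive sorted values."""
    values = sorted(values)
    diffs = [b - a for a, b in zip(values, values[1:])]
    return sum(diffs) / len(diffs), min(diffs), max(diffs)

def trimmed_average(values):
    """Second run: recompute the average with the extreme gaps removed."""
    values = sorted(values)
    diffs = sorted(b - a for a, b in zip(values, values[1:]))
    inner = diffs[1:-1]  # drop the smallest and the largest difference
    return sum(inner) / len(inner)

# Illustrative values only; substitute the real key values from the 67 data.
sample_keys = [3, 11, 40, 95, 96, 180]
avg, smallest, largest = difference_stats(sample_keys)
print(f"Average difference:  {avg}")
print(f"Smallest difference: {smallest}")
print(f"Largest difference:  {largest}")
print(f"Average (trimmed):   {trimmed_average(sample_keys)}")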
Again, could someone do this and find the key? Maybe. Could they do this and skip or miss the key? 100%.