-snip-
970 436 974 005 023 690 481 = 970436974005023690481 (19YZECXj3SxEZMoUeJ1yiPsw8xANe7M7QR). In puzzle 69 or 70 the parts theoretically can deviate, but sometimes they are repeated: 199 976 667 976 342 049
On the other hand, mixing so that each "portion" is increased equally still finds it...
-snip-
What does this mean? I really do not understand you...
It means that if we randomly take 7 parts of 970436974005023690481 (970 436 974 005 023 690 481), then random search finds it in around ~2,000,000 steps:
import random
import time

# The 7 three-digit parts of the target number 970436974005023690481.
Nn = ['970', '436', '974', '005', '023', '690', '481']
nnn = Nn  # *1 *10000000  (optionally duplicate the list)

# The same parts can also be obtained by slicing the string into 3-char chunks:
h1 = "970436974005023690481"
g = [h1[i:i + 3] for i in range(0, len(h1), 3)]
# print(g)

def func():
    return random.choice(nnn)

count = 0
while True:
    # Concatenate 7 randomly chosen parts and compare with the target.
    a = int(func() + func() + func() + func() + func() + func() + func())
    count += 1
    if a == 970436974005023690481:
        print(count, "find..............................", a)
        time.sleep(60.0)
        break
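A back-of-the-envelope estimate for this model: each of the 7 draws must hit one specific part out of the pool, so one attempt succeeds with probability (1/pool_size)**7 and the search is a geometric trial whose mean is pool_size**7. A minimal sketch (the ~2,000,000 figure above is a single observed run, which can easily exceed the mean):

```python
# Expected number of attempts to assemble the exact 7-part sequence
# when each part is drawn uniformly from a pool of N distinct parts.
def expected_attempts(pool_size, draws=7):
    # Success probability per attempt is (1 / pool_size) ** draws,
    # so the mean of the geometric distribution is pool_size ** draws.
    return pool_size ** draws

print(expected_attempts(7))   # 7 parts:  7**7  = 823543
print(expected_attempts(11))  # 11 parts: 11**7 = 19487171
```

These means are of the same order as the step counts reported in the thread.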
If we add more parts, the search time grows; for example, with 10-11 parts it takes around 30,000,000 steps (PC speed probably also has an effect). 970 436 974 005 023 690 481 + 333 894 200 100:
import random
import time

# The 7 target parts plus 4 extra three-digit parts in the pool.
Nn = ['970', '436', '974', '005', '023', '690', '481',
      '333', '894', '200', '100']
nnn = Nn  # *1 *10000000  (optionally duplicate the list)

# The target's own parts, obtained by slicing the string into 3-char chunks:
h1 = "970436974005023690481"
g = [h1[i:i + 3] for i in range(0, len(h1), 3)]
# print(g)

def func():
    return random.choice(nnn)

count = 0
while True:
    # Concatenate 7 randomly chosen parts and compare with the target.
    a = int(func() + func() + func() + func() + func() + func() + func())
    count += 1
    if a == 970436974005023690481:
        print(count, "find..............................", a)
        time.sleep(60.0)
        break
If we duplicate these 10-11 parts up to 10,000,000 copies before mixing, the result is sometimes still in the region of 10,000,000 steps, apparently within the margin of error (no significant impact).
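That duplication has no significant impact is expected: multiplying the list copies every part the same number of times, so random.choice still draws each part with probability 1/11. A quick deterministic check:

```python
from collections import Counter

Nn = ['970', '436', '974', '005', '023', '690', '481',
      '333', '894', '200', '100']

# Duplicating the list leaves the relative frequencies unchanged, so
# random.choice draws each part with the same probability 1/11 either way.
for copies in (1, 1000):
    pool = Nn * copies
    freq = Counter(pool)
    probs = {part: n / len(pool) for part, n in freq.items()}
    assert all(abs(p - 1 / 11) < 1e-12 for p in probs.values())
print("duplication does not change the per-part probability")
```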
The question is how to divide the whole space from 000 to 999 for enumeration: 1000/11 ≈ 90 pieces, to iterate in ~30,000,000 steps.
Can it help?
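One way to read this question (an interpretation, not something the post specifies) is to split the 000..999 space into N contiguous buckets, one per known part, each covering about 1000/N values, and count which bucket each known part lands in:

```python
import math

# Sketch under the assumption above: N buckets of ceil(1000/N) values each.
parts = [970, 436, 974, 5, 23, 690, 481, 333, 894, 200, 100]
N = len(parts)                  # 11 buckets
size = math.ceil(1000 / N)      # each bucket spans 91 values

buckets = [0] * N
for p in parts:
    # Index of the bucket containing p (clamped to the last bucket).
    buckets[min(p // size, N - 1)] += 1

print(size, buckets)  # 91 [2, 1, 1, 1, 1, 1, 0, 1, 0, 1, 2]
```

The resulting histogram is the raw material for the "which parts are under-represented" reasoning in the factors below.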
What random factors affect the search?
1) The classic 50% factor: on average we will find the private key after searching about 50% of the range.
(For example, vanitygen prints this statistic when searching for a vanity address.)
2) Classical probability theory, as an average distribution.
Divide each range into N parts.
Calculate in which part each known private key is located.
Parts with fewer known private keys have a higher probability of containing the private key of the next puzzle.
In game theory terms, this is like a casino that guarantees all possible outcomes come up an approximately equal number of times.
+50% probability
3) Classical probability theory, over time.
Divide each range into N parts.
Calculate in which part each known private key is located.
The lowest probability will be in the part where the private key was found the previous time.
In game theory this is known as the "double-up game".
+50% probability
And how many steps would be required if there are 20 parts, 1000/20 = 50 pieces to iterate, etc.?
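Under the same sampling model as the scripts above (still 7 draws per attempt, but from a larger pool; the post does not say how many draws a 20-part target would actually need, so this is an assumption), the expected step count scales as pool_size**7:

```python
# Hypothetical scaling estimate: mean attempts grow as pool_size ** draws.
DRAWS = 7  # assumed fixed, as in the scripts above
for pool_size in (7, 11, 20):
    print(pool_size, "parts ->", pool_size ** DRAWS, "expected steps")
```

So going from 11 to 20 parts raises the expected count from about 1.9e7 to 1.28e9, roughly a 65x increase.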