It's simply a fact that Python is slow: it's an interpreted language. 10 to 15 minutes may still be acceptable in this case, but what if you need to analyze 1 million private keys, or do something more complex with fewer keys? The same program in C or Rust can easily run 10-100x faster, so instead of 3 months the run would take about a day. That's quite significant.
Although compiling a Python program doesn't make it run faster, you can still optimize for speed by leaning on built-in functions implemented in C. Sensible memory use through generators instead of lists, list comprehensions instead of for loops (or avoiding explicit loops entirely), and small tricks like multiple assignment can significantly reduce the runtime of your programs; a short sketch of these appears after the benchmark below.

But none of that helps if you don't also choose the right tool for the job, namely an appropriate data structure or algorithm. For example, suppose you need to determine whether the number 40 is odd or even. You could write a recursive function that calls itself 40 times down to zero and 40 times back up to produce the answer, or you could use a single modulo operation to check the remainder. A bitwise AND with 1 does the same check and can be marginally faster still, although in the benchmark below the function-call overhead hides most of that difference.
import time

def is_odd_rec(n):
    # Recursive definition: flips the answer once per step down to zero.
    if n == 0:
        return False
    else:
        return not is_odd_rec(n - 1)

def is_odd_mod(n):
    # Single modulo operation: odd numbers leave a remainder of 1.
    return n % 2 == 1

def is_odd_bit(n):
    # Bitwise AND with 1: the lowest bit is set only for odd numbers.
    return n & 1

start_time = time.time()
for i in range(1000000):
    is_odd_rec(40)
print(time.time() - start_time)
# output 8.055624008178711

start_time = time.time()
for i in range(1000000):
    is_odd_mod(40)
print(time.time() - start_time)
# output 0.24014711380004883

start_time = time.time()
for i in range(1000000):
    is_odd_bit(40)
print(time.time() - start_time)
# output 0.23915982246398926
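On this run the recursive version is roughly thirty times slower than the arithmetic ones, while the modulo and bitwise variants are essentially tied. The other tricks mentioned above can be measured the same way. Here is a minimal sketch (the function names and the summing workload are illustrative, not part of the key-analysis program) comparing a manual for loop against the C-implemented built-in sum() fed by a list comprehension and by a generator expression; on a typical CPython build the built-in variants beat the manual loop, and the generator expression additionally avoids materializing the intermediate list.

import time

N = 1000000

def sum_squares_loop(n):
    # Manual for loop: every iteration runs interpreted bytecode.
    total = 0
    for i in range(n):
        total += i * i
    return total

def sum_squares_listcomp(n):
    # List comprehension feeds the C-implemented built-in sum(),
    # but builds the whole list in memory first.
    return sum([i * i for i in range(n)])

def sum_squares_genexpr(n):
    # Generator expression: same benefit from sum(),
    # without allocating the intermediate list.
    return sum(i * i for i in range(n))

for func in (sum_squares_loop, sum_squares_listcomp, sum_squares_genexpr):
    start_time = time.time()
    func(N)
    print(func.__name__, time.time() - start_time)

# Multiple assignment: swap two values without a temporary variable.
a, b = 1, 2
a, b = b, a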