I guess the lesson here is that <Space x Time> is always a constant!
....
Faster = Larger
Slower = Smaller
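To put numbers on it, assuming the classic baby-step/giant-step layout (a table of S precomputed points, then about N/S group operations to sweep an interval of N keys), here's a minimal sketch of the tradeoff; the variable names are mine:

```python
# Space x Time stays pinned at the interval size N however you split it,
# assuming a BSGS-style search: S stored baby steps, N/S giant steps.
N = 2**40                                   # interval size
for s_bits in (18, 20, 22):
    S = 2**s_bits                           # stored points ("space")
    T = N // S                              # group operations ("time")
    print(f"space 2^{s_bits} x time 2^{T.bit_length() - 1}"
          f" = 2^{(S * T).bit_length() - 1}")
```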
---
Now imagine you find an interval of 2^20 keys where a certain property occurs with probability 1/4, while the same property occurs with probability 1/2 in the rest of the space (example: points with X coordinate lower than 0.5 * 2^256),
i.e. a set where the property occurs less frequently than its expected value.
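As a sanity check on that expected value: x coordinates of arbitrary points behave roughly like uniform field elements, so the example property should hold about half the time in general. A quick simulation (it only models x coordinates as random integers; the special interval where the frequency drops to 1/4 is the empirical find and isn't reproduced here):

```python
# Estimate how often "X lower than 0.5 * 2^256" holds for a generic
# point, modeling x coordinates as uniform secp256k1 field elements.
import random

P = 2**256 - 2**32 - 977                    # secp256k1 field prime (~2^256)

def has_property(x):
    return x < 2**255                       # "X lower than 0.5 * 2^256"

samples = 100_000
hits = sum(has_property(random.randrange(P)) for _ in range(samples))
print(f"frequency ~ {hits / samples:.3f}")  # ~0.500 = the expected value
```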
---
By the time you're doubling the amount of work (time) just to halve the required space, you're already far behind a different approach (Pollard's kangaroo, for example), which uses about the same amount of time (work) but only a constant amount of space...
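Rough numbers, assuming the constant-space alternative is indeed something like Pollard's kangaroo at roughly 2*sqrt(N) group operations with O(1) storage:

```python
# Each halving of the table doubles the time, while a kangaroo-style
# search (assumed here) stays at ~2*sqrt(N) operations and O(1) space.
N = 2**40
kangaroo_ops = 2 * int(N ** 0.5)            # ~2^21 operations
space, time = 2**20, 2**20                  # balanced table-based start
for _ in range(3):
    space, time = space // 2, time * 2      # halve space -> double time
    print(f"table 2^{space.bit_length() - 1} points,"
          f" time 2^{time.bit_length() - 1} ops"
          f" vs kangaroo ~2^{kangaroo_ops.bit_length() - 1} ops, O(1) space")
```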
In my previous example I split 2^40 = 2^20 (space) * 2^20 (time),
but you can split the same interval this way: 2^40 = 2^22 (space) * 2^18 (time).
In my idea you do more work precomputing the keys, but you store fewer of them: if I find a certain property, I keep only the keys that have it (2^22 / 2^2 = 2^20 keys stored);
then I need to generate only 2^18 * 2 = 2^19 keys (time) to retrieve the private key.
This way I get: 2^20 (space) x 2^19 (time) = 2^39 (the constant is lower than 2^40).
You have to do more work to generate the DB, but then you need fewer calculations; a sketch follows below.
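Here is a minimal, self-contained sketch of one way I can see to realize this filtered table (my reading of the idea, not necessarily the exact scheme): store only the baby steps whose point satisfies the property, and walk each giant step forward point by point until it reaches a property point, which is then guaranteed to be in the table. Everything is toy-sized so it runs in seconds, and the property used ("even x coordinate", probability ~1/2) merely stands in for the 1/4-probability one; all names (has_property, table, etc.) are mine:

```python
# Filtered baby-step/giant-step sketch in pure Python (toy sizes).
# Only baby steps whose point satisfies has_property() are stored;
# each giant step is walked forward until it hits such a point,
# which is then in the table whenever the key lies in the interval.

# secp256k1 parameters
P  = 2**256 - 2**32 - 977
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
G  = (Gx, Gy)

def add(A, B):
    # affine point addition/doubling on y^2 = x^3 + 7, None = infinity
    if A is None: return B
    if B is None: return A
    if A[0] == B[0] and (A[1] + B[1]) % P == 0: return None
    if A == B:
        lam = 3 * A[0] * A[0] * pow(2 * A[1], -1, P) % P
    else:
        lam = (B[1] - A[1]) * pow(B[0] - A[0], -1, P) % P
    x = (lam * lam - A[0] - B[0]) % P
    return (x, (lam * (A[0] - x) - A[1]) % P)

def mul(k, A=G):
    # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1: R = add(R, A)
        A = add(A, A)
        k >>= 1
    return R

def has_property(Q):
    # stand-in for the rare property (probability ~1/2 here, not 1/4)
    return Q is not None and Q[0] % 2 == 0

# toy split: interval of 2^16 keys = 2^10 (space) * 2^6 (time)
m, giants = 2**10, 2**6
secret = 0x1234                      # demo key, pretend it's unknown
target = mul(secret)                 # the public point to recover it from

# baby steps: store only the filtered points (about half of them here)
table, Q = {}, G
for i in range(1, m + 64):           # small overrun so each walk can land in-table
    if has_property(Q):
        table[Q[0]] = i
    Q = add(Q, G)
print(f"stored {len(table)} of {m + 63} baby steps")

# giant steps: walk each one forward to the next "property" point
minus_mG = mul(m)
minus_mG = (minus_mG[0], -minus_mG[1] % P)
T = target
for j in range(giants + 1):
    W, k = T, 0
    while not has_property(W):       # expected ~2 extra additions per giant step
        W, k = add(W, G), k + 1
    i = table.get(W[0])
    if i is not None:
        cand = j * m + i - k         # reconstruct the private key
        if mul(cand) == target:      # guard against stray x collisions
            print(hex(cand))         # -> 0x1234
            break
    T = add(T, minus_mG)
```

Note the asymmetry that makes the special interval pay off, if I read the idea right: the stored fraction is set by the property's probability inside the baby-step range (1/4 there), while the walk cost per giant step is set by its probability along the walks (1/2 in the general space, so ~2 extra additions each); that is exactly the 2^20 (space) x 2^19 (time) accounting above.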
I'm not trying to break key #135; I'm trying to optimize the search in a small interval (just for fun, to better understand how it works).