No, it would require much more than 2^80 work. Or rather, it would require 2^80 work plus storage capable of containing a hash table of 2^80 * (256 bit + 81 bit) = 337 * 2^80 bit ≈ 2^88.4 bit, i.e. more than 2^35 PB.
The current max size of my hash table is 2^28 * (64 bit + 32 bit) = 96 * 2^28 bit ≈ 2^34.6 bit ≈ 3 GB (to store 2^28 keys in RAM). That is only about 1/2^54 of 2^88.4 bit!
There is no way to get that amount of RAM in the next 40 years.
Could you go a bit more into how you get the numbers in the parentheses? You lost me there.
Look at this code:
https://gist.github.com/jhoenicke/2e39b3c6c49b1d7b216b8626197e4b89

You need to store a list of 2^80 public keys in a hash table. For the sake of simplicity, suppose we have enough RAM.
Then:
#define GSTEP (1<<80)  /* illustrative only: 2^80 does not fit in a standard integer type */
typedef struct hashtable_entry {
    uint256_t x;        /* x coordinate of the public key (not a standard C type) */
    uint81_t exponent;  /* multiplier that generated the point (not a standard C type) */
} hashtable_entry;
#define HASH_SIZE (2*GSTEP)
hashtable_entry table[HASH_SIZE];
Each entry holds the x coordinate (256 bit) of a public key plus an exponent (the multiplier that generated the point, so the private key can be recovered when a match is found). The hash table must be larger than the list it stores to minimize collisions (see
https://en.wikipedia.org/wiki/Hash_table); the list has 2^80 elements and HASH_SIZE is 2*GSTEP = 2^81, so the exponent field needs at least 81 bits. -->
(256 + 81 bit)
For the #57 key instead:
#define GSTEP (1<<28)
typedef struct hashtable_entry {
uint64_t x;
uint32_t exponent;
} hashtable_entry;
#define HASH_SIZE (2*GSTEP)
hashtable_entry table[HASH_SIZE];
I use 32 bits for the exponent (32 > 28) and I store only the first 64 bits of the x coordinate (there is a low chance of a partial collision in a list of 2^28 elements, i.e. two different x coordinates with the same first 64 bits) -->
(64 + 32 bit)

To avoid collisions altogether you should always use the full 256 bits of the x coordinate. And the size of the hash table should be at least twice the size of the list you want to store.