Wait, so what advantage does storing in RAM give exactly? I thought if you store some strings in RAM you can do an instant lookup in them.
Yes, I understand what you are saying; the best we can do is either a bloom filter or binning 2-phase.
Side question: do you have a high-end CPU/GPU?
RAM is "instant" (as in, no seek times) only if you know what address to access. There's also the CPU L1/L2/L3 caches, that have even lower latencies. If addresses are closer (like string bytes are) they'll usually end up in the L1 cache of a core, making you believe the speed is so good because data was in RAM, when some data was actually only read once, in bulk, and used word after word via the L1 cache and the CPU registers.
No, you can't store a bloom filter in L1, that cache is just a few hundred KB in size. Store in RAM whatever needs fast access (or is read often), save to disk whatever doesn't fit in RAM. Easy, right? That's where the magic is, the implementation. If you're looking to do this to break 135, though, no optimizations will help; you'll still need many millions of CPUs, on average. Or maybe just one.
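A minimal sketch of that RAM/disk split, assuming the items are strings; the sizing numbers and the check_on_disk() helper are hypothetical, not anything from this thread. The bloom filter stays in RAM and rejects the vast majority of misses outright; only the rare lookups that pass the filter go on to the slow, authoritative on-disk check.

```cpp
#include <cstdint>
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

struct BloomFilter {
    std::vector<uint64_t> bits;
    size_t nbits;
    int k;  // number of hash probes per item

    BloomFilter(size_t nbits_, int k_) : bits((nbits_ + 63) / 64, 0), nbits(nbits_), k(k_) {}

    // Derive k probe positions from two hash values (double-hashing trick).
    void probes(const std::string& item, std::vector<size_t>& out) const {
        uint64_t h1 = std::hash<std::string>{}(item);
        uint64_t h2 = h1 * 0x9E3779B97F4A7C15ULL + 1;
        out.clear();
        for (int i = 0; i < k; ++i)
            out.push_back((h1 + static_cast<uint64_t>(i) * h2) % nbits);
    }

    void add(const std::string& item) {
        std::vector<size_t> p;
        probes(item, p);
        for (size_t pos : p) bits[pos / 64] |= (1ULL << (pos % 64));
    }

    // False positives are possible, false negatives are not.
    bool maybe_contains(const std::string& item) const {
        std::vector<size_t> p;
        probes(item, p);
        for (size_t pos : p)
            if (!(bits[pos / 64] & (1ULL << (pos % 64)))) return false;
        return true;
    }
};

// Hypothetical stand-in for the slow, authoritative on-disk lookup
// (stubbed out here; a real version would query the full database).
static bool check_on_disk(const std::string& item) { (void)item; return true; }

// The split being described: reject in RAM first, hit the disk only if needed.
static bool lookup(const BloomFilter& bf, const std::string& item) {
    if (!bf.maybe_contains(item)) return false;  // most misses end here, in RAM
    return check_on_disk(item);                  // rare: confirm against the full DB
}

int main() {
    BloomFilter bf(64ULL * 1024 * 1024, 7);      // ~8 MB of bits, 7 probes (example numbers)
    bf.add("hello");
    std::printf("hello: %d\n", lookup(bf, "hello"));  // passes the filter, goes to disk check
    std::printf("world: %d\n", lookup(bf, "world"));  // almost surely rejected in RAM
}
```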
So RAM is only helpful for storing a bloom filter or a small set of strings?
"Store in RAM whatever needs to have fast access" ,what do you mean by this we can't know what needs fast acces because every point of the db needs fast access ,or you mean the bloom filter