I can see what you're saying. LTC was the answer to BTC's difficulty getting too high because of ASICs, so now we have to wait and see what the answer to LTC is against the coming ASICs. If there is such a coin that stands a chance, and pools can adapt to it, that would be the new standard...
An easy way of making ASICs unprofitable is to design algorithms that require large memory buffers and whose performance is bound by memory bandwidth rather than arithmetic. ASICs provide the greatest benefit for algorithms that are arithmetic-bound, and the least benefit for algorithms that are bound by memory bandwidth. By combining a large memory buffer with random access patterns, we would get a level playing field that evolves very slowly.

GPUs today have 200-300 GB/s of memory bandwidth, and that figure has only increased by a small margin generation to generation. GPUs are expected to get a nice jump in bandwidth when memory technologies like die-stacked memory show up in a few years, but after that, bandwidth growth will be very slow again. A large part of the complexity and cost in a GPU is the memory system, and it is only feasible to build because millions of GPUs are sold per week.

By developing an algorithm that requires a hardware capability that is only cost-feasible in commodity devices manufactured in quantities of several million or more, you would push ASICs out completely, and keep them out for a very long time, perhaps indefinitely. It's one thing to fab an ASIC chip; it's another thing to couple it to a high-capacity, high-bandwidth memory system. If you design an algorithm that uses the "memory wall" as a fundamental feature, ASICs will be no better than any other hardware approach.
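The two ingredients described above (a large buffer filled with pseudorandom data, then data-dependent random reads across it) can be sketched as a toy memory-hard hash. This is purely illustrative, not scrypt or any real coin's algorithm; the function name and parameters are made up, and the buffer is kept small so it runs quickly:

```python
import hashlib

def memory_hard_hash(password: bytes, salt: bytes, n_blocks: int = 1 << 14) -> bytes:
    """Toy memory-hard hash (illustrative only, not a production KDF).

    Phase 1 fills a buffer with chained SHA-256 outputs; phase 2 does
    data-dependent random reads over that buffer.  Because each read
    address depends on the running state, the accesses can't be
    predicted or prefetched, so throughput is bound by memory
    bandwidth/latency rather than by hashing arithmetic.
    """
    block = 32  # SHA-256 digest size in bytes

    # Phase 1: sequentially fill the buffer, each block derived from the last.
    buf = bytearray(n_blocks * block)
    h = hashlib.sha256(password + salt).digest()
    for i in range(n_blocks):
        h = hashlib.sha256(h).digest()
        buf[i * block:(i + 1) * block] = h

    # Phase 2: random reads whose addresses depend on the evolving state.
    state = h
    for _ in range(n_blocks):
        idx = int.from_bytes(state[:8], "little") % n_blocks
        state = hashlib.sha256(state + bytes(buf[idx * block:(idx + 1) * block])).digest()
    return state
```

With the default of 2^14 blocks this touches only 512 KB, but scaling `n_blocks` up forces the working set out of on-chip caches and into DRAM, which is where the "memory wall" the post talks about kicks in.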