But I find one idea quite compelling to think about: do you think publicly available artificial intelligence will reach a point where it can be asked to find vulnerabilities in the most secure (or rather, most pervasively used) algorithms? Or could it intentionally be fed misinformation, garbage-in-garbage-out style, so that it always provides an answer that pleases the public?
This opens up entire conversations that can't be compressed into a single post, but I'd say that as AI models become more sophisticated, they could be used to identify vulnerabilities in cryptographic systems. I'm not entirely sure what considerations need to be taken into account, but I do know we can't rely exclusively on a creation that bears no responsibility for what it produces. (That is, under the hypothesis that we can't blame the AI developers for an AI's false output.)
I don't fully understand what is being said here, but my closest deduction is that models of traceability have changed over the years, and one hardly finds a need to eliminate traces in fiat or banknotes except when the money is sure to be subject to questioning; it's laundered money and related ways of handling money that raise an eyebrow.
Even if true, you can't preemptively forbid someone from wishing that. People aren't guilty until proven innocent.