Oh shit you're serious.
AIs that show enough general intelligence to act autonomously beyond reactive pattern recognition are still far off. It's definitely worth discussing the matter sooner rather than later, because once it happens -- if it happens -- society will have little time to adapt. But until that is the case I doubt that there's any form of artificial intelligence out there with enough agency and understanding to allow for meaningful conversation.
You need to get up to speed. Cleverbot and Chatterbot have come close to passing the Turing test. And those are just the simple chatbots that are publicly known and widely available. You would be hard pressed to tell the difference between AI and human interaction. Within a few years most chat customer service will be done by AI. Even Bitmain is focusing on AI with their Sophon initiative.
Machines have already passed the Turing test, as far back as 2014:
https://www.theguardian.com/technology/2014/jun/08/super-computer-simulates-13-year-old-boy-passes-turing-test
Even setting aside the debate over whether the metrics for a successful Turing test are actually meaningful, and whether simulating a 13-year-old boy who speaks English as a second language counts as "cheating", it's an impressive feat of course.
But passing the Turing test is still a long way from what we could consider general intelligence.
Merely acting like a human correspondent doesn't make a machine a problem solver. Merely solving problems doesn't give a machine agency. Merely having agency doesn't give a machine insight.
Machines are barely able to act like a human correspondent. They recognize patterns far beyond human capabilities but are not quite there yet in terms of problem solving. And those are just the first baby steps toward what could be considered general intelligence.
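To make "reactive pattern recognition" concrete: at their simplest, chatbots of the ELIZA lineage just match the input against canned patterns and fill a response template, with no model of the problem at all. A minimal sketch (the rules and responses here are invented for illustration, not taken from Cleverbot or any real system):

```python
import re

# Each rule pairs a regex with a response template; captured groups
# are echoed back to fake engagement with what the user said.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"\byou\b", re.I),       "We were talking about you, not me."),
]

def reply(message: str) -> str:
    """Return the canned response for the first matching pattern."""
    for pattern, template in RULES:
        m = pattern.search(message)
        if m:
            return template.format(*m.groups())
    return "Tell me more."  # generic fallback when nothing matches

print(reply("I feel tired today"))  # Why do you feel tired today?
print(reply("Hello there"))         # Tell me more.
```

Nothing here solves a problem or understands anything; it only reacts to surface patterns, which is why passing as a correspondent says so little about general intelligence.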
I'm not saying that it ain't going to happen. I'm just saying that we still have a long way to go, despite all the hype surrounding ML and neural networks lately.