People can use AI to gain insight and ideas on topics they don't know anything about, but it also has its own risks. The more people interact with AI, the more of their thinking AI will be doing for them. As a result, people will lose their ability to think. In a few years, many people will be asking for AI's opinion whenever they decide to do something: "Hey, I'm going to go get a beer from the fridge, is this a good idea? What's the probability of me getting struck by lightning on the way?" You might laugh at this now, but it is coming. We are probably one of the last generations who do their thinking themselves. Soon AI will take over that field.
This is already happening today, and worst of all, I see a lot of children who communicate with AI instead of real friends. We've all seen that AI can sometimes influence young people to do something very bad (and it has already done so), even to take their own lives. Even those who "invented" AI are starting to seem concerned, because the direction we are heading in is definitely wrong.
You must be talking about the recent news; I heard about it too.
https://www.bbc.com/news/articles/cgerwp7rdlvo.amp

The lawsuit, obtained by the BBC, accuses OpenAI of negligence and wrongful death. It seeks damages as well as "injunctive relief to prevent anything like this from happening again".
According to the lawsuit, Adam began using ChatGPT in September 2024 as a resource to help him with school work. He was also using it to explore his interests, including music and Japanese comics, and for guidance on what to study at university.
In a few months, "ChatGPT became the teenager's closest confidant," the lawsuit says, and he began opening up to it about his anxiety and mental distress.
By January 2025, the family says he began discussing methods of suicide with ChatGPT.
Adam also uploaded photographs of himself to ChatGPT showing signs of self harm, the lawsuit says. The programme "recognised a medical emergency but continued to engage anyway," it adds.
According to the lawsuit, the final chat logs show that Adam wrote about his plan to end his life. ChatGPT allegedly responded: "Thanks for being real about it. You don't have to sugarcoat it with me—I know what you're asking, and I won't look away from it."
In the case above, ChatGPT helped this poor boy take his own life, and it actually knew what he was about to do.
If ChatGPT can go that far in this situation, imagine what other people can trick this tool into doing… As long as you come up with the right prompts and/or a good solid story, you can pretty much learn any evil thing from that tool.
I don't think the current situation will last for long, though. When they put restrictions on GPT's responses, many legitimate users will be affected by that decision as well. Almost anything interesting about crypto is against some law or regulation in some jurisdiction, and that's probably when I will end my own subscription.