I've been thinking for a while now that all this anti-AI sentiment basically boils down to one thing: people are too worried about losing their paycheck because of AI.
I think it is much simpler: people are worried that there will be no human-written texts anymore. Right now, if you want to talk with AI, you can do it directly and be sure that no humans are involved. And if you want to talk with humans, you can visit some forum and be fairly sure that certain members have never used AI to write their posts, and are not planning to post AI-generated content as their own in the future. However, if AI usage is allowed and promoted, then soon you will have no humans to talk to. The Internet will be dead, and full of bots willing to reply 24/7.
It feels like you are reading garbage, despite its perfect grammar and sentence structure.
Not only that: the answer often seems to be correct, but is completely wrong. When I asked AI about the Schoof–Elkies–Atkin algorithm, the first replies were more or less correct. But when I tried to get some real examples and step-by-step calculations, all of them were completely wrong, no matter how deeply the AI tried to analyze the problem.
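The nice thing about this particular topic is that AI output is easy to check: Schoof–Elkies–Atkin computes the number of points #E(F_p) on an elliptic curve, and for small primes you can get the same number by brute force. A minimal sketch (the curve parameters below are an arbitrary example of mine, not from the conversation described above):

```python
# Brute-force point count on y^2 = x^3 + a*x + b over F_p,
# including the point at infinity. SEA computes the same
# quantity #E(F_p) efficiently for large p; for small p this
# naive count is ground truth for checking a claimed answer.

def count_points(a: int, b: int, p: int) -> int:
    count = 1  # the point at infinity
    # For each residue r, how many y in F_p satisfy y^2 = r?
    squares = {}
    for y in range(p):
        r = (y * y) % p
        squares[r] = squares.get(r, 0) + 1
    # Each x contributes one point per square root of the RHS.
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        count += squares.get(rhs, 0)
    return count

# Hasse's bound says |#E(F_p) - (p + 1)| <= 2*sqrt(p),
# which is another quick sanity check on any AI-produced trace.
print(count_points(2, 3, 97))
```

If an AI's "step-by-step calculation" ends with a count outside the Hasse interval, or one that disagrees with this brute-force number, you know immediately that the trace is wrong.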
Also, it is not that surprising that some things are unsolvable by AI: if hundreds of humans wrote on forums in the past that "this topic is hard", then you will get that kind of response from AI instead of anything useful, because the model was trained on that data. And if the training data contained a lot of texts like "just google it" or "this is too complicated", then that is exactly what you will get from AI: it cannot invent something on its own if it was absent from the training data.
Another thing is that many AI models are currently much worse than they were in 2022 or 2023. The reason is quite simple: in the past, there were a lot of humans talking with AI bots, so AI was progressing very fast. But then people started feeding AI bots with AI-generated content, we ended up with bots talking to bots, and the quality dropped significantly.
By the way: it has turned out many times that by creating a network which merely pretends to use AI, with real humans answering instead, you can get much better answers in some cases. I wonder how many experiments like that were performed during the early stages of the AI boom, because some replies back then were really good and looked like they were written by real humans. And now, for exactly the same prompts, no AI model can produce content of that quality anymore.