There is a topic with quite a few examples of how people catch ChatGPT spammers. Moderators also use their own verification methods; otherwise, how would we explain that some reports are accepted as good while others remain unprocessed?
Detecting AI is tricky unless there is obvious copy-and-paste. Write an article with AI, change words where necessary, add a few lines or even remove a few, and it becomes nearly impossible to prove the source was AI, even though it still is. The only tool we really have for detecting AI is to trust our own feeling.
When I read posts and suspect a user, I start reading through their post history. It takes time, but after a few posts you develop a feeling that lets you make a decision. The problem with this method is that I can easily be wrong, and a member could end up a victim of my wrong conclusion.
I have to say this one is true. Detecting AI-generated posts is easier said than done; once you are actually in the process, it can be much harder than it looks, especially if you only check a user's current posts without digging into their past history. That is probably why these AI users are motivated to keep doing what they're doing: it is hard for them to get caught, and even if they are caught, it just means extra work for the forum admins. All I can really say is that detecting AI posts is genuinely hard and somewhat tricky.