So would it make sense to try to detect AI-generated content with an AI classification model? The training data for such a model would need to include both AI-generated examples and human-written content, and then you'd train a classification algorithm to separate the two. Would that be possible?
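In principle, yes, that's just a binary text-classification problem. Roughly something like this (a minimal sketch in Python with scikit-learn; the tiny lists here are placeholders standing in for a real labelled dataset, and TF-IDF plus logistic regression is just one simple baseline, not the only way to do it):

```python
# Minimal sketch: train a binary classifier to separate AI-generated text
# from human-written text. Assumes you already have labelled samples of
# both kinds; the short lists below are placeholders, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

human_posts = [
    "example human-written post",
    "another real forum post",
    "a third post written by a person",
]
ai_posts = [
    "example AI-generated post",
    "another generated post",
    "a third machine-generated post",
]

texts = human_posts + ai_posts
labels = [0] * len(human_posts) + [1] * len(ai_posts)  # 0 = human, 1 = AI

# Hold out part of the data so you can check the classifier actually generalises
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.33, random_state=42
)

# TF-IDF features fed into logistic regression as a simple baseline
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
print(model.predict(["some new post to check"]))  # 1 means flagged as AI-generated
```

With real data you'd want far more samples than this, and the hard part is exactly what's being discussed here: as the generators get better, the two classes become harder to separate.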
For the forum? Probably not. Other areas of the internet will likely implement this, and I imagine even Google will come up with something to combat this sort of stuff on their own platforms. As for the forum, we probably don't see enough of it, and when it does happen, if it's so indistinguishable that we haven't noticed, it probably isn't that big of a deal. I know that sounds a little weird, but the thing about low-quality posts is that they tend to derail threads and interrupt the reading experience of others, which is the real problem. When someone can generate posts which aren't doing that, yeah, it's an issue, but it's not a massive problem if no one notices.
I'm not sure if that's coming across clearly enough, but I do think this will present a problem going forward; I just haven't looked into it enough to figure out how much of a problem it might be. There has to be an incentive for someone to do this, and simply engaging on a forum probably isn't enough. You could argue signature campaigns, but there's a distinct difference between a high-quality poster and someone who's using these tools. Even the examples above, despite being pretty good at staying on topic, aren't really saying a whole lot. It's rather generic.