Current predictive models generate text essentially word by word. Simplistically put, at each step there is a weighted list of candidate words, and one is chosen stochastically according to that distribution. The weights can be gently nudged with a "fingerprint", so that the algorithm favours certain words at a statistically detectable frequency (that frequency pattern is the "fingerprint"). In a text of more than a thousand words, the probability of that pattern occurring by chance is low (how low depends on how strongly the fingerprint was allowed to alter the weights).
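As a rough illustration of the scheme described above, here is a toy sketch of a "green list" style watermark. Everything here is invented for demonstration (the vocabulary, the bias factor, the function names): the previous word seeds a PRNG that marks half the vocabulary "green", the sampler gently boosts green words, and a detector counts how often each word lands in its predecessor's green list. Real systems work on tokens and logits, not a word list like this.

```python
import hashlib
import random

VOCAB = [f"word{i}" for i in range(1000)]  # invented toy vocabulary

def green_list(prev_word, fraction=0.5):
    # Seed a PRNG with the previous word so the same "green" half of
    # the vocabulary can be recomputed at detection time.
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def sample_word(prev_word, weights, bias=2.0):
    # Gently boost the weights of green-list words, then sample
    # stochastically from the adjusted distribution.
    greens = green_list(prev_word)
    adjusted = [w * bias if tok in greens else w
                for tok, w in zip(VOCAB, weights)]
    total = sum(adjusted)
    return random.choices(VOCAB, [w / total for w in adjusted])[0]

def green_fraction(words):
    # Detection: what fraction of words fall in the green list
    # seeded by their predecessor? Chance level is ~0.5 here.
    hits = sum(1 for prev, cur in zip(words, words[1:])
               if cur in green_list(prev))
    return hits / max(1, len(words) - 1)

random.seed(0)
weights = [1.0] * len(VOCAB)  # uniform "model" for the demo
text = ["word0"]
for _ in range(500):
    text.append(sample_word(text[-1], weights))

plain = [random.choice(VOCAB) for _ in range(500)]  # unwatermarked control
print(green_fraction(text), green_fraction(plain))
```

With `bias=2.0` and a half-size green list, a watermarked word is green with probability 2/3 instead of 1/2, so over 500 words the watermarked text scores well above the unwatermarked control, which is exactly the "statistically detectable frequency" the post refers to.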
GPT-2 detection has a loophole! You only need to run the text generated by ChatGPT (or any other AI content bot) through a spin writer and the detector will no longer flag it. You would only end up catching members who have no real command of English; the lazy buggers who know the language but don't want to write about a topic themselves can't be tracked. I wouldn't rely on a free tool to check for AI-generated content. There are some paid ones, but they aren't worth investing in as long as the content makes sense and doesn't break any rules of the forum.
Don't forget that many AI systems write in several languages. A response would have to be very mechanized/generic, presenting information out of context, for us to tell whether it originated from an AI or not.
I agree that investing in detection tools is very risky and not viable.
I think the best way to find out whether a post was made using AI is to ask the user about their response: ask them to explain it in their own words. The way they respond (if they respond at all) can hint at whether or not an AI is behind it.