Out of his last 50 posts, 32 are showing 100% AI detection on Sapling AI. No other detector, such as Copyleaks or GPTZero, is flagging them as AI. The posts are long and detailed (IMO, AI). What do you make of it?
IMO, this is a case of humanized AI content. I mean, is it normal to get such a high rate of false positives from Sapling AI? Have you encountered this before?
There are a couple of reasons for this:
1. Sapling will give a result no matter how small the input text is. It will return a score even for 10 words, which is not a big enough sample size for an accurate result.
2. Sapling detects Google Translate and Grammarly rephrasings as AI-generated, which is correct, because they are. However, using AI in that way is not as bad as using it to construct a post from the ground up. That's why it's good to use another detector (or two) in concert with Sapling, to rule out the chance that these are (semi) false positives; see the rough sketch after this list.
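To make the "detectors in concert" idea concrete, here is a rough Python sketch of one way to do it: skip texts that are too short to judge, then only flag a post when at least two detectors agree above a high threshold. This is just an illustration, not anything the detector vendors provide. The Sapling endpoint and the "score" field are my recollection of their public API, so double-check their docs before relying on it, and the MIN_WORDS, threshold, and min_agree values are made-up knobs you would tune yourself.

```python
import requests

MIN_WORDS = 50  # assumed minimum sample size; very short texts give unreliable scores


def sapling_score(text, api_key):
    """Query Sapling's AI-detection endpoint (as I recall it from their docs).
    Assumed to return a probability-like score: 0 = human, 1 = AI."""
    resp = requests.post(
        "https://api.sapling.ai/api/v1/aidetect",
        json={"key": api_key, "text": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["score"]  # response shape is an assumption


def looks_ai_generated(text, detectors, threshold=0.9, min_agree=2):
    """Flag text only if it is long enough AND at least `min_agree`
    detectors score it at or above `threshold`."""
    if len(text.split()) < MIN_WORDS:
        return None  # sample too small for a reliable verdict
    votes = sum(1 for detect in detectors if detect(text) >= threshold)
    return votes >= min_agree


# Example usage (my_gptzero_check and my_copyleaks_check are hypothetical
# wrappers you would write around those services' own APIs):
# detectors = [lambda t: sapling_score(t, "YOUR_SAPLING_KEY"),
#              my_gptzero_check, my_copyleaks_check]
# verdict = looks_ai_generated(post_text, detectors)
```

Requiring agreement from more than one detector is exactly the "rule out semi false positives" step: a Grammarly or Google Translate rephrasing will often trip only Sapling, while a post built from scratch by an LLM tends to trip several tools at once.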
I'm not good at English, so I have to communicate with my client using Google Translate. Even though I write everything myself without any AI tool, I've noticed that AI detector tools give false positive results most of the time when I check my writing.
That's because Google Translate itself uses AI (neural machine translation) to produce the text. Sapling detects this, while the other detectors don't.