Here's the problem: Google is now using the same type of LLM for Google Translate that something like ChatGPT uses, which means its output is going to get flagged as AI-generated.
What better way to explain the situation than by asking ChatGPT itself:
AI detection tools like ZeroGPT often classify text produced by Google Translate as AI-generated because of how both systems process language:
1. Google Translate Uses AI (NMT)
Google Translate's Neural Machine Translation (NMT) system generates text using deep learning models — the same kind of technology used by generative AI like ChatGPT. Since it's literally AI-generated text, detection tools can pick up on that.
2. Text Patterns Resemble AI-Generated Output
AI-generated text tends to have certain statistical features:
- Fluent and grammatically consistent structure
- Even distribution of vocabulary
- Low randomness or burstiness
Google Translate often produces uniform, polished, and pattern-consistent sentences, similar to those produced by large language models, which can trigger AI detectors.
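To make "burstiness" concrete, here is a minimal illustrative sketch — not the formula any real detector such as ZeroGPT actually uses — that measures it as the spread of sentence lengths. Uniform, evenly paced sentences (typical of machine output) score low; human writing with its mix of long and short sentences scores higher:

```python
import statistics

def burstiness(text: str) -> float:
    """Rough burstiness proxy: standard deviation of sentence lengths
    (in words). Low values mean uniform sentence lengths, a pattern
    detectors associate with machine-generated text. This is an
    illustrative heuristic, not a real detector's algorithm."""
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # not enough sentences to measure variation
    return statistics.stdev(lengths)

uniform = "The cat sat down. The dog ran off. The bird flew away."
varied = "Wow. After a long and exhausting day, I finally got home. Rest."
print(burstiness(uniform) < burstiness(varied))  # → True
```

Translated text tends to land on the "uniform" side of a metric like this, regardless of how bursty the original source language was.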
3. Lack of Human-Like Variation
Human-written text often includes:
- Idiomatic expressions
- Minor grammatical inconsistencies
- Personal style or emotion
Translated text from Google Translate can feel overly clean or synthetic, making it more likely to be flagged — even if the original text was written by a human.
So basically Google Translate is taking the gist of what a user is trying to express and repackaging it in the same statistical style as the hundreds of other AI-generated posts reported in this thread.
It would be a conundrum, except that ultimately it still comes down to whether or not a post is considered "spam" by moderators. If a post isn't saying anything meaningful, or is just repeating something that has already been said a thousand times before, I'd say it should still be deleted as spam.