In academia, authors should properly cite ChatGPT or similar LLM-generated material. Failing to do so would be a serious problem, because such material needs to be reviewed by a qualified academic and verified for correctness before it enters the scholarly record.
It's baffling that LLM-generated posts are considered acceptable in any shape or form. LLMs are inherently biased, because that is a consequence of how they are designed and trained. They should have no place in forums where we value the voices of actual people who can think for themselves.