99% of AI users on Bitcointalk will just use it to attempt plagiarism and spam, both of which are already prohibited by forum rules.
It's going to be difficult to prove the plagiarism, and the spam can reach very large volumes. I don't want to have to doubt whether each post is genuine. Imagine what happens when a shitposter has a bot that earns him signature payments by asking questions on the tech boards.
Also, it is straightforward to detect GPT-3 content; there are AI detection sites that do just that.
It's not easy: it's a lot of manual work, and it doesn't provide hard evidence. The first detector I tested claimed my own post was 81% fake.
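That 81% false positive isn't surprising once you see how these detectors work: they score statistical surface features of the text rather than proving authorship. As a purely illustrative sketch (this is a toy heuristic of my own, not the method any real detection site uses), one such feature is "burstiness", the idea that humans vary sentence length more than a model does:

```python
import re
import statistics

def burstiness_score(text):
    """Toy heuristic: return the standard deviation of sentence lengths
    (in words). Human prose tends to mix short and long sentences, so a
    low score *may* hint at generated text. This is not hard evidence of
    anything -- which is exactly why such detectors misfire on real posts."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.pstdev(lengths)

# Varied sentence lengths score high; uniform ones score zero.
human_like = ("Short one. Then a much longer rambling sentence that goes "
              "on for a while about nothing in particular. Ok.")
uniform = "Four words in here. Also four words here. Again four words here."
print(burstiness_score(human_like) > burstiness_score(uniform))  # True
```

Any single number like this is easy to game and easy to trip over, which is why a "fake" percentage from a detection site can't serve as proof against a user.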
In that case, we need admins to explicitly state in the rules that using artificial intelligence to create content here is considered plagiarism. Otherwise, someone could argue that the old rule never covered AI in the first place.
Pretending the AI's text was created by you
is the plagiarism. It doesn't matter where the AI got it.
If an AI is able to consistently create content that doesn't break any of the forum's rules (good quality, on-topic, not just a padded word salad, no plagiarism, etc., etc.)
I could agree with this, if the post makes it clear it was created by an AI. Without that, if I post something created by an AI, my cat, or a sweatshop filled with elves, it's not my own creation and I should provide a source.