Post
Topic
Board Meta
Re: The J.A.R.V.I.S AutoReply Protocol Initiative (JARPI) - Powered by ChatGPT
by joker_josue on 22/04/2023, 22:45:00 UTC
Makes me wonder whether there is such a thing as an "academic score" for measuring the substance of a text. The idea being that the same university-level standards currently applied against plagiarism would be used to score a post on several criteria, like originality and informativeness. (That score would be posted next to every topic, so that there are no accusations of "collusion" and so that people can realize when they are shitposting.)

And people who don't even manage to reach the minimum score get no bonus (and ChatGPT, lacking any emotion at all, would score a zero and get ejected).

For example, in my campaign there is a bonus that goes to the best posters. It is quite a manual process, though; maybe some researcher has come up with a heuristic for this in the last 100 years.

But is there a better way to do this, other than manually?
The best quality assessment of a post still comes from a human. Especially because the notion of quality varies from person to person, and even with guidelines in place, only a human being can weigh it properly.

This is really not an easy problem, but we have to keep debating ideas in order to find solutions that reduce this level of spam and low-quality posting.
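
Just to put some flesh on that "heuristic" idea from the quote: below is a very rough Python sketch of how a per-post score could be computed automatically before a human takes the final decision. Everything in it is my own assumption for illustration (Jaccard word overlap as a proxy for originality, type/token ratio with a length penalty as a proxy for informativeness, the 50/50 weighting, and names like academic_score); it is nowhere near what a real plagiarism checker or a campaign manager actually uses.

Code:
# Toy sketch of a post-quality heuristic (hypothetical, not part of JARPI):
# "originality" = how little the post overlaps with earlier posts in the thread,
# "informativeness" = lexical variety, with a penalty for very short posts.

def tokens(text: str) -> set[str]:
    """Lower-cased word set, ignoring surrounding punctuation."""
    return {w.strip(".,!?;:\"'()") for w in text.lower().split() if w.strip(".,!?;:\"'()")}

def originality(post: str, thread: list[str]) -> float:
    """1.0 = no overlap with any earlier post, 0.0 = fully copied (Jaccard-based)."""
    post_words = tokens(post)
    if not post_words or not thread:
        return 1.0 if post_words else 0.0
    overlaps = []
    for earlier in thread:
        earlier_words = tokens(earlier)
        union = post_words | earlier_words
        overlaps.append(len(post_words & earlier_words) / len(union) if union else 0.0)
    return 1.0 - max(overlaps)

def informativeness(post: str, min_words: int = 30) -> float:
    """Type/token ratio, scaled down for posts shorter than min_words."""
    words = post.lower().split()
    if not words:
        return 0.0
    ttr = len(set(words)) / len(words)
    length_factor = min(1.0, len(words) / min_words)
    return ttr * length_factor

def academic_score(post: str, thread: list[str]) -> float:
    """Crude 0-100 score: equal weight on originality and informativeness."""
    return round(50 * originality(post, thread) + 50 * informativeness(post), 1)

# Example: a short "thank you" reply is dragged down by the length penalty,
# while a longer, on-topic reply scores noticeably higher.
thread = ["The proposal adds an automatic reply filter powered by a language model."]
print(academic_score("Nice project, thanks!", thread))
print(academic_score("A per-post score would need clear criteria, otherwise "
                     "campaign managers will just argue about the weighting.", thread))

Even a crude filter like this would only be a pre-screen; the borderline cases would still end up in front of a human, for the reasons given above.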