@NotATether: I'm starting to see your point about letting an AI classify shitposts. But, I don't frequent super spammy boards, so most of my issue is actually
sig spam on good boards. The best way to combat sig spam is to have campaign managers stop paying for it. A system like the one I'm proposing could achieve that without them having to change anything about how they run their campaigns; the posts would simply stop being paid for, because the managers would never see them. In my estimation, "AI" can often just be replaced with the phrase "black box", and I don't like the idea of affecting people's earnings with something that isn't accountable, makes decisions unilaterally, and frequently gets things wrong.
I am also in general against any proposal which brings us closer to Reddit's "mob rule" upvote/downvote type system, where unpopular but factually accurate posts are often hidden from view.
Yep, very much agree! But surely you can appreciate that what I've described so far has the makings of a workable system? Are you really so confident that no combination of parameters will end up working?
Don't you think some kind of trusted "post curator" list (or something similar) is worth exploring? (Please don't take the shortcut of immediately asking: "Okay, but who decides who's a curator and who isn't?")
Or, what if the community could see a log (either combined or separate) of each user's activity within this system, so that everything was done out in the open and negative trust could be used as a tool against abusers?