Post
Topic
Board Meta
Re: Trust System Upgrade
by
TECSHARE
on 01/06/2019, 15:26:02 UTC
...
I get what you mean now, and it's understandable. I do have a few queries regarding it though:

1. Your method only seems to come into effect after the fact, and could miss out some problematic scenarios. For example:
  • A new member makes a thread offering to sell $200 gift cards for $50. The thread links to another website where the gift card is for sale, and the topic is locked. This user hasn't broken any laws and hasn't necessarily scammed anyone yet, so there is no solid evidence of any wrongdoing despite several red flags being present.
  • A legendary account that has just had its password and email address changed puts out a loan request for $200, citing itself (the account) as collateral. The user had been inactive for years before this, and the type of language they use has completely changed. Again, they haven't broken any rules and haven't necessarily scammed anyone, despite several red flags suggesting the account has been hacked.
Do you plan for a kind of natural-selection deal, where people have to look out for the signs themselves before interacting with others? Because, as nice as this would be, it could land some people in a shit spot should they not recognize the signs of a scam.

2. Your idea for how to moderate this kind of behavior works in theory, but in practice it would always need a staff member babysitting it to ensure, in their subjective opinion, that it's working properly. For example, if there is a gang present in the trust system as you seem to believe, would this gang not just work with each other to mitigate any potential punishment for leaving invalid ratings?

1. This could be addressed first of all by making a thread in the "Reputation" subforum to discuss the issue and alert others to it. That thread could then be referenced with a neutral rating. This has the additional benefit of training new users to read trust ratings instead of just looking for little red and green flashy bits and moving on. We are already operating under "a kind of natural selection deal." People who are not doing due diligence cannot be protected from themselves, and their behavior under the existing model already gets them into a "shit spot." Even if by happenstance they get protected once or twice, the cause of the issue is still not being resolved, making it inevitable that they are eventually robbed.

2. This seems to be the constant refrain: that more staff will be needed to do this, when in actuality no additional staff intervention would be needed; in fact, probably even LESS would be needed than under the current system. This theoretical gang would, with every retaliatory or baseless rating, be subjecting themselves to public scrutiny, since there is a standard of evidence and a simple form of due process (the requirement of the objective standard). As it is currently, no one has any accountability for their ratings or exclusions; it is just a matter of "I believe XYZ, and I am not even going to bother explaining myself." This objective metric makes that giant loophole for abuse MUCH smaller, and again redirects accountability back to those making the accusation if their evidence is seen to be lacking. They may very well gang up to continue abuse, but now everyone will see exactly what they are doing, and it will be MUCH more difficult for them to justify their actions, as opposed to the current system, where no one is obligated to explain any of these choices or "beliefs".



What is your point even? If they are smart enough to erase their trail completely, then you aren't catching them anyway. Again, what evidence is available would be submitted for public review. Neutral ratings could serve as warnings with the thread referenced. I agree "untrustworthy behavior is incredibly subjective", which is why I am arguing for an OBJECTIVE standard of evidence before a negative rating. Account sellers could again be neutral rated with a reference thread. This is not covering it up. This is defining a very objective line so that the community can focus on fraud, not on absolutely everything being perceived as subjectively "untrustworthy". The negative trust factory farmers would just have to find a way to get their dopamine hits elsewhere, rather than from dropping negative ratings on people assembly-line style and fellating themselves over the moderate amount of authority it gives them.

My point is that everyone who uses the internet is used to reading through bias and noise; restricting noise is going to reduce real, valuable ratings. The current feedback system already works with public review. Most claims have reference links to the thread in question, where a user can read more about the issue themselves. Poor feedback isn't insignificant either; it helps people follow patterns of who they should and shouldn't trust based on who sends the feedback. The proposal here is to change the capability of tens of thousands of users due to the actions of a half dozen people. They have their side of the story, and everyone else has theirs. Public review can take care of whether their feedback is significant or not.

Setting up rules so that people can't be jackasses just means that they'll find other ways to be jackasses. Let everyone put their worst self forward, and it's not an issue.

Noise is by definition invalid. To restrict noise would not restrict valid ratings. Now, you might want to argue there are things people should be aware of that would not fall under the objective metric (a standard of evidence of theft, violation of a contractual agreement, or violation of applicable laws), which is a fair point, but like other users you are inherently excluding the bad results of subjective ratings while demanding we only address the good. Anything that does not fall firmly within the objective metric can be handled with a thread in the "Reputation", "Market Discussion", or "Scam Accusations" subforums, along with a neutral rating referencing the thread. Saying that a system wide open to abuse is good because it allows us to see who is abusing it is an asinine argument. Nothing would prevent these kinds of judgments based on several other factors. So you are saying you would rather have the clusterfuck of constant infighting we have now, because it shows who is abusive, as opposed to an objective metric that cuts out the vast majority of this bullshit before it even starts?