...
I get what you mean now, and it's understandable. I do have a few queries regarding it though:
1. Your method only seems to come into effect after the fact, and could miss some problematic scenarios. For example:
- A new member makes a thread offering to sell $200 gift cards for $50. The thread links to another website where the gift cards are for sale, and the user locks the topic. This user hasn't broken any laws and hasn't necessarily scammed anyone yet, so there is no solid evidence of wrongdoing despite several red flags being present.
- A legendary account that has just had its password and email address changed puts out a loan request for $200, citing itself (the account) as collateral. The user had been inactive for years before this, and the language they use has completely changed. Again, they haven't broken any rules and haven't necessarily scammed anyone, despite several red flags suggesting the account has been hacked.
Do you plan for there to be a kind of natural-selection situation, where people have to look out for the warning signs themselves before interacting with others? As nice as that would be, it could land some people in a shit spot if they don't recognize the signs of a scam.
2. Your idea for how to moderate this kind of behavior works in theory, but in practice it would always need a staff member to babysit it and ensure it's working properly in their subjective opinion. For example, if there is a gang present in the trust system, as you seem to believe, wouldn't this gang just work together to mitigate any potential punishment for leaving invalid ratings?