1. This could be addressed first of all by making a thread in the "reputation" subforum to discuss the issue and alert others to it. That thread could then be referenced with a neutral rating. This has the additional benefit of training new users to actually read trust ratings instead of just looking for little red and green flashy bits and moving on. We are already operating under "a kind of natural selection deal": people who do not do their due diligence cannot be protected from themselves, and under the existing model their behavior already lands them in a "shit spot". Even if by happenstance they are protected once or twice, the root cause is still not resolved, so eventually being robbed is inevitable.
Alright, but the issue with neutral ratings is in the name: they have no effect, and they are practically invisible unless you specifically go looking for them. I understand your point about users doing due diligence before trading (and agree with it, to an extent), but you're assuming every user who sees the problematic post will:
1. Be logged in, and so able to see the trust system at all. In my example the potential scammer was not trading on the forum, but instead linking off the forum for their sales. Potential victims have no reason to even have an account here, as they could just as easily stumble upon the thread through other means.
2. Know what the trust system is, and know what to look for. As much as you may dislike the 'red and green flashy bits', they are one of the only sure-fire ways to get people to actually notice something. If the potential victim doesn't know what the trust system is, let alone what a neutral rating means, your plan for dealing with these scenarios is useless to them. You could argue that someone shouldn't be trading if they don't understand the trust system, but given the massive variety in users here (in age, language proficiency, etc.) that is somewhat unfair.
Whether you agree with the way the current trust system is used or not, an invisible-until-looked-for rating on a potential scammer is going to do diddly squat to help a good portion of potential victims.
2. This seems to be the constant refrain, that more staff will be needed to do this, when in actuality no additional staff intervention would be needed; in fact, probably even LESS would be needed than under the current system. With every retaliatory or baseless rating, this theoretical gang would be subjecting themselves to public scrutiny, as there is a standard of evidence and a simple form of due process (the requirement of an objective standard). As it stands, no one has any accountability for their ratings or exclusions; it is just a matter of "I believe XYZ, and I am not even going to bother explaining myself." This objective metric makes that giant loophole for abuse MUCH smaller, and it redirects accountability back onto those making the accusation if their evidence is seen to be lacking. They may very well gang up to continue the abuse, but now everyone will see exactly what they are doing, and it will be MUCH more difficult for them to justify their actions than under the current system, where no one is obligated to explain any of these choices or "beliefs".
The only staff intervention (as in use of staff powers) that happens within the trust system currently is deletion of spam. I can't imagine that would change.
I understand that you want accountability, but what exactly would that change? You say yourself that they may just continue to 'gang up', only now everyone can see what they're doing. Without staff intervention or forced removal of ratings, which wouldn't work anyway, how exactly would this system be any different from the one we have currently? If there is always a big spooky mafia that neg-rates everyone who disagrees with them, what difference would accountability make?