Situation resolved, by the way.
Indeed, this is good to see.

The more threads I read like this, the more I'm convinced that a type of down-voting would be beneficial.
The whole point of trust feedback is for people to look at it and write their own feedback, isn't it? I have always been hesitant to add any sort of regulatory micromanaging when it comes to DefaultTrust, and I think adding an option for some arbitrary number of DT members (who cycle every month), chosen by another arbitrary metric of merit, to delete feedback is prone to abuse.
In my mind the suggestion would be for acting DT1 to be able to downvote, not "voting-only" DT1 (those with negative strength). This would at least remove the arbitrary metric of merit for deleting feedback, instead requiring positive DT1 strength as well as inclusions from those with enough merit. I do get your point that micromanagement of DT feedback could be bad, as it wouldn't be a solution to the fundamental problems, but from another perspective the introduction of flags that are voted on does appear to be an improvement. Notably, it could lead to fewer exclusions: if users had the ability to "hide" negative feedback, there would be less need to exclude a user to achieve the same goal.
The main downside I see is that DT1 members could down-vote legitimate DT2 feedback (legitimate to DT2 members at least), which ultimately puts more power into the hands of DT1, which is unnecessary imo. Arguably it'd be better if a downvote required 5 DT1 members or 10 DT2 members, or a combination thereof, so that each DT2 vote counts as half a DT1 vote, for example (a rough sketch of that weighting is below). But I also think this would be a very complex mechanism to introduce, even if the abuse of down-voting could be limited and restricted. Either way, there is already abuse; any positive change (such as trust flags) would still be open to abuse, just possibly a little less.
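To illustrate the combination rule, here's a minimal sketch of how the weighting could work. The weights, threshold, and function name are purely my own illustration of the "5 DT1 or 10 DT2" idea, not anything that exists in the forum software:

```python
# Hypothetical weighting for the down-vote threshold described above.
# Assumptions (mine, for illustration only): a DT1 vote counts as 1.0,
# a DT2 vote counts as 0.5, and feedback is hidden once the combined
# weight reaches 5.0 (5 DT1 votes, 10 DT2 votes, or any mix in between).

DT1_WEIGHT = 1.0
DT2_WEIGHT = 0.5
HIDE_THRESHOLD = 5.0  # expressed in DT1-equivalent votes

def feedback_hidden(dt1_votes: int, dt2_votes: int) -> bool:
    """Return True once enough weighted down-votes have accumulated."""
    combined = dt1_votes * DT1_WEIGHT + dt2_votes * DT2_WEIGHT
    return combined >= HIDE_THRESHOLD

# feedback_hidden(5, 0)  -> True   (5 DT1 members)
# feedback_hidden(0, 10) -> True   (10 DT2 members)
# feedback_hidden(3, 4)  -> True   (3 + 4*0.5 = 5.0)
# feedback_hidden(2, 3)  -> False  (2 + 1.5 = 3.5)
```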
Ultimately, I think any "fair" decentralized system will always be prone to abuse; the question isn't how to avoid this, but how to counter it in a decentralized manner (instead of further centralizing). Systems that aren't prone to abuse are usually very centralized and restrictive, but that is because they are usually fundamentally abusive as a construct.