Now this is exactly the kind of thing you could use an AI for. They're neutral, don't get into trust disputes with anyone, and never apply to join signature campaigns because they don't know what the hell that is.
But even though it's neutral, an AI doesn't have the human capacity to understand everything involved in a post. So it can sometimes unfairly rate something negatively that doesn't deserve it, or rate something positively when it shouldn't.
It's true that the bot can improve on these details, but human behavior has infinite possibilities, so something can look like one thing to an AI and be something totally different, and only human discernment can tell the difference.
I'm not against AI helping with our day-to-day tasks, but I think that if it generates content on the forum, it distorts the forum's purpose.
Another detail:
Here we are discussing a "good" bot intended to automatically answer repeated questions.
But what would stop the emergence of "evil" bots that create content while pretending to be human?
We have to be careful not to set precedents that later make everything harder to control.