Seems like every time I turn around there's some issue with default trust.
Maybe all the humans on there should be replaced with versions of that AI that just got released from Swiss police custody: bots that only engage in transactions where they can verify whether trust was broken. For example, a PayPal default trust bot sells BTC-equivalent Bitcointalk tokens, checks that the buyer doesn't charge back for 1 year, and leaves positive or negative trust for the risked BTC depending on whether they do. A software default trust bot publishes unique coding tasks (ones that can't be plagiarized) for BTC-equivalent Bitcointalk tokens in various programming languages, then validates the submitted code: it leaves neg trust if the code is invalid or errors out, and removes the neg trust if the tokens are returned.
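To make the second idea concrete, here's roughly how I picture the software bot's bookkeeping. This is just a sketch in Python; the names (SoftwareTrustBot, settle_task, etc.) are made up, since the forum obviously has no API for leaving trust feedback, and the actual code validation step is left out:

```python
from dataclasses import dataclass

@dataclass
class Feedback:
    user: str
    positive: bool
    risked_tokens: float
    comment: str

class SoftwareTrustBot:
    """Hypothetical bot: pays tokens for a coding task, then rates the coder."""

    def __init__(self) -> None:
        self.feedback: dict[str, Feedback] = {}

    def settle_task(self, coder: str, tokens_paid: float, code_is_valid: bool) -> None:
        # Valid code -> positive trust; invalid/erroring code -> neg trust,
        # with the tokens paid recorded as the risked amount.
        self.feedback[coder] = Feedback(
            user=coder,
            positive=code_is_valid,
            risked_tokens=tokens_paid,
            comment="task passed validation" if code_is_valid else "submission failed validation",
        )

    def handle_refund(self, coder: str) -> None:
        # If the coder returns the tokens, the neg trust comes off.
        fb = self.feedback.get(coder)
        if fb is not None and not fb.positive:
            del self.feedback[coder]


# Example: an invalid submission gets neg trust, and a refund removes it.
bot = SoftwareTrustBot()
bot.settle_task("coder123", tokens_paid=0.05, code_is_valid=False)
bot.handle_refund("coder123")
assert "coder123" not in bot.feedback
```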
Any other ideas?
How would this account for the fact that a certain amount of human judgment is necessary when trying to determine whether something is a scam or not? The trust system is not a trade feedback system, but a trust system. This would also not account for the fact that it is not possible for a computer to know for sure whether a trade was successful (e.g. buying bitcoin for cash in the mail, mailing a physical good to a buyer, etc.). Nor would it account for things like people who are clearly engaging in trades for no reason other than to get additional feedback (i.e. obvious trust farming).