Seems like every time I turn around there's some issue with default trust.
Maybe all the humans on there should be replaced with versions of that AI that just got released from Swiss police custody, ones that engage in transactions where they can verify whether trust was broken. For example, a PayPal default trust bot sells BTC-equivalent Bitcointalk tokens, waits to see whether the buyer charges back within a year, and leaves positive or negative trust for the risked BTC accordingly. A software default trust bot publishes unique coding tasks (ones that can't be plagiarized) for BTC-equivalent Bitcointalk tokens in various programming languages, then validates the submitted code: it leaves negative trust if the code is invalid or errors out, and removes the negative trust if the tokens are returned.
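Roughly something like this for the software bot, as a sketch only; the forum-side API for posting tasks and leaving feedback is imaginary, so that part stays pseudocode:

```python
# Rough sketch of the software default trust bot. The forum API for posting
# tasks and leaving feedback does not exist; only the validation part runs.
import random
import subprocess

def generate_task():
    """Make a unique task with hidden test cases so the answer can't be plagiarized."""
    a, b = random.randint(100, 999), random.randint(100, 999)
    prompt = "Write a program that reads two integers from stdin and prints their product."
    hidden_tests = [(f"{a} {b}\n", f"{a * b}\n")]
    return prompt, hidden_tests

def validate_submission(script_path, hidden_tests):
    """Run the submitted script against the hidden tests; True means it passes."""
    for stdin_data, expected in hidden_tests:
        try:
            result = subprocess.run(
                ["python3", script_path],
                input=stdin_data,
                capture_output=True,
                text=True,
                timeout=10,
            )
        except subprocess.TimeoutExpired:
            return False
        if result.returncode != 0 or result.stdout != expected:
            return False
    return True

# The trust decision itself would need a forum API that doesn't exist,
# so it stays as pseudocode:
#   if validate_submission(path, tests): leave_positive_trust(seller)
#   else: leave_negative_trust(seller)   # removed if the tokens are returned
```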
Any other ideas?
How would this account for the fact that a certain amount of human judgment is necessary when trying to determine whether something is a scam or not? The trust system is not a trade feedback system; it is a trust system. This also would not account for the fact that it is not possible for a computer to know for sure whether a trade was successful (e.g. buying bitcoin for cash in the mail, mailing a physical good to a buyer, etc.). Nor would it account for things like people who are clearly engaging in trades for no reason other than to get additional feedback (i.e. clear trust farming).
The bolded bit sounds an awful lot like what your "opponents" have been saying: humans, lacking both innate judgement and the ability to judge even who should and shouldn't be "trust rangers" (no human is omnipercipient), still get scammed.
I left off those examples because it would have taken me too long at the time to rough-sketch the bot code. However, there are a number of virtual mailbox services that operate off of USPS Form 1583 (http://about.usps.com/forms/ps1583.pdf), a method that could conceivably be used to verify receipt of physical items: the bot would run a counterfeit check on the scanned bills and object recognition on physical goods photographed from the proper perspective. Ultimate disposal of the cash or goods would probably have to be done by the agent in exchange for service credit, or converted into an ACH payment into the bot's account, which I suppose would be used to buy BTC, pay for more goods/gigs, or cover the bot's own hosting fees.
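A very rough sketch of the receipt-verification piece, object recognition only: real counterfeit detection would need dedicated hardware rather than a scan, and all the mailbox/forum plumbing here is hypothetical.

```python
# Sketch: did the expected item show up in the agent's photo of the mailbox
# contents? Uses a stock pretrained ImageNet classifier as a stand-in for
# whatever object recognition the bot would actually run.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing for a pretrained classifier.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(pretrained=True)
model.eval()

def photo_contains(expected_label: str, photo_path: str, class_names: list) -> bool:
    """Return True if the expected item is among the top-5 predicted classes."""
    img = Image.open(photo_path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)
    with torch.no_grad():
        scores = model(batch)
    top5 = torch.topk(scores, 5).indices[0].tolist()
    return any(expected_label in class_names[i] for i in top5)

def counterfeit_check(bill_scan_path: str) -> bool:
    """Placeholder only: bill authentication can't really be done from a flatbed scan."""
    raise NotImplementedError
```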
Trust farming could be screened for by having alternate bot usernames or human investigators retest anyone who gets trusted by the default trust bots.
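Something like this for the retest sampling, with the data source (who the bots have recently trusted) being entirely hypothetical:

```python
# Sketch: periodically pick a random slice of users the default trust bots
# have rated and queue them for a fresh test from an alternate bot account
# or a human investigator.
import random

def pick_retest_candidates(recently_trusted: list, sample_rate: float = 0.1) -> list:
    """Choose a random fraction of recently trusted users for re-testing."""
    k = max(1, int(len(recently_trusted) * sample_rate))
    return random.sample(recently_trusted, k)
```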