Seems like every time I turn around there's some issue with default trust.
Maybe all the humans on there should be replaced with versions of that AI that just got released from Swiss police custody, which would engage in transactions where broken trust can actually be verified. For example, a PayPal default trust bot sells BTC-equivalent Bitcointalk tokens and watches whether the buyer charges back within 1 year, leaving positive or negative trust for the risked BTC accordingly. A software default trust bot publishes unique coding tasks (that can't be plagiarized) for BTC-equivalent Bitcointalk tokens in various programming languages, then validates the code: it leaves negative trust if the code is invalid or errors out, and removes the negative trust if the tokens are returned.
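A rough sketch of what the coding-task bot's loop might look like (Python). The forum/escrow plumbing is hand-waved as a callback since there is no real Bitcointalk API for any of this, so those parts are placeholders:
[code]
import random
import subprocess
import tempfile


def make_unique_task() -> dict:
    """Generate a coding task with randomized constants so it can't be plagiarized."""
    a, b = random.randint(100, 999), random.randint(100, 999)
    return {
        "prompt": f"Write a Python script that prints the product of {a} and {b}.",
        "expected_output": str(a * b),
    }


def validate_submission(code: str, expected_output: str) -> bool:
    """Run the submitted code and compare its output to the expected answer.
    A real bot would sandbox this -- submissions are untrusted code."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        result = subprocess.run(["python3", path], capture_output=True,
                                text=True, timeout=10)
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0 and result.stdout.strip() == expected_output


def run_round(get_submission) -> str:
    """One round: publish a task, collect the seller's code, decide on trust."""
    task = make_unique_task()
    code = get_submission(task["prompt"])  # stand-in for posting the task on the forum
    if validate_submission(code, task["expected_output"]):
        return "leave positive trust for the risked tokens"
    return "leave negative trust until the tokens are returned"


if __name__ == "__main__":
    # Toy "seller" that actually solves the task, for demonstration only.
    def honest_seller(prompt: str) -> str:
        nums = [int(w.strip(".")) for w in prompt.split() if w.strip(".").isdigit()]
        return f"print({nums[0]} * {nums[1]})"

    print(run_round(honest_seller))
[/code]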
Any other ideas?
How would this account for the fact that human judgment is necessary when deciding whether something is a scam? The trust system is not a trade feedback system, but a trust system. It would also not account for the fact that it is not possible for a computer to know for sure whether a trade was successful (e.g. buying bitcoin for cash in the mail, mailing a physical good to a buyer, etc.). Nor would it account for people who are clearly engaging in trades for no reason other than to get additional feedback (i.e. obvious trust farming).
I left those examples off because it would have taken me too long at the time to rough sketch the bot code. However, there are a number of virtual mailbox services that operate under http://about.usps.com/forms/ps1583.pdf - a setup that could conceivably be used to verify receipt of physical items, with the bot running a counterfeit check on the scanned bills and object recognition on physical goods photographed from the proper perspective. Ultimate disposal of the cash or goods would probably have to be done by the agent in exchange for service credit, or converted to an ACH payment into the bot's account, which I suppose would be used to buy BTC, pay for more goods/gigs, or cover the bot's own hosting fees.
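For the trust decision itself, something as simple as this could work once the mailbox agent's scans have been analyzed (Python sketch only; the ScanReport fields are made up, and the actual counterfeit check / object recognition is assumed to happen upstream in some imaging service):
[code]
from dataclasses import dataclass
from typing import List


@dataclass
class ScanReport:
    """Hypothetical output of the counterfeit-check / object-recognition step."""
    path: str
    is_cash: bool
    passed_counterfeit_check: bool = False
    matched_expected_item: bool = False


def decide_trust(reports: List[ScanReport]) -> str:
    """Cash must all pass the counterfeit check; goods need at least one matching photo."""
    cash = [r for r in reports if r.is_cash]
    goods = [r for r in reports if not r.is_cash]
    cash_ok = all(r.passed_counterfeit_check for r in cash) if cash else True
    goods_ok = any(r.matched_expected_item for r in goods) if goods else True
    if cash_ok and goods_ok:
        return "leave positive trust"
    return "leave negative trust until the trade is made whole"


if __name__ == "__main__":
    reports = [
        ScanReport("scan_front.png", is_cash=True, passed_counterfeit_check=True),
        ScanReport("scan_back.png", is_cash=True, passed_counterfeit_check=True),
    ]
    print(decide_trust(reports))  # -> leave positive trust
[/code]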
Trust farming could be screened for by having alternate bot usernames or human investigators retest anyone who gets trusted by the default trust bots.
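The retest screening could be as dumb as randomly sampling newly trusted accounts and handing each one to an alternate bot username or a human investigator (illustrative Python only; none of these names are a real API):
[code]
import random
from collections import deque

RETEST_RATE = 0.25  # fraction of newly trusted accounts to re-screen


def queue_retests(recently_trusted: list, alternate_bots: list) -> deque:
    """Pick a random subset for retesting and assign each an alternate bot."""
    if not recently_trusted:
        return deque()
    sample_size = max(1, int(len(recently_trusted) * RETEST_RATE))
    picked = random.sample(recently_trusted, sample_size)
    return deque((user, random.choice(alternate_bots)) for user in picked)


if __name__ == "__main__":
    for user, bot in queue_retests(["alice", "bob", "carol", "dave"],
                                   ["trustbot_2", "trustbot_3"]):
        print(f"retest {user} via {bot}")
[/code]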
That might work as a trade feedback system, although it would not do anything to prevent scams from happening in the first place. For example, if a user is making a too-good-to-be-true offer, how is an inexperienced user supposed to know that it is probably a scam? I know that tomatocage often screens people's actual willingness to accept escrow with newbie accounts, and flags them with his own account when they show signs of not accepting escrow (e.g. they stop responding). What about people who state that they are desperate for money and then all of a sudden start selling something very expensive? What about face-to-face deals and other deals not done within the forum?