This is the sort of paranoia we need more of around here!
In this case, a less trusted forum member (me) was leveraging the trust of someone who was much more trusted. Michael Hendrix met all my requirements for how to choose someone to trust if you must (listed above), except obviously he had no insurance himself. In that forum thread I was telling the people placing bets that they don't need to trust me if they trust him, since he was holding my bond.
We could have been in cahoots, but there wouldn't be any point to doing that. Michael already has a lot of trust - he doesn't need my help to scam people if he decides he wants to do so.
The problem is that trust is a snowball: the amount being trusted just keeps getting bigger and bigger. I'm a cynic, so I don't believe anybody is incorruptible - it's just a matter of price. Sometimes it isn't even the person's intention to defraud anybody; circumstances can force him into a bad decision.
After all, suppose a crook knows Michael is holding, say, 10K BTC for you - about US$120K now, but hey, it might be US$1.2M two years later. Since his full name and such are public, the crook knows exactly where Michael lives. What's to stop him from paying Michael a surprise visit and forcing him to send that 10K BTC, plus his own personal stash, somewhere his partner can immediately convert it to cash?
I've been trying to come up with a system that is trustless at every point, but seriously, every time I find that a human being can still fuck it up from outside the technological system. Of course it could just be my paranoia inventing "ridiculous scenarios" that will never happen in real life. Even then, it's only a question of how probable they are.
So fundamentally the only way to reduce the exposure to near zero is a system with a low fraud/loss probability P, combined with never trusting it with more than some amount X, so that the expected loss P * X is always a small enough number that users won't really feel it even if the system fails.
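If it helps, here's that arithmetic spelled out as a tiny Python sketch. The 1% failure probability and the 5 BTC loss tolerance are made-up numbers, just to show how the bound works - not estimates of any real service:

# Back-of-the-envelope sketch of the P * X idea, with made-up numbers.

def max_exposure(p_failure: float, tolerable_expected_loss: float) -> float:
    """Largest amount X you should trust to a system that fails with
    probability p_failure, if you want the expected loss P * X to stay
    under your tolerance."""
    return tolerable_expected_loss / p_failure

# Hypothetical example: you figure the system fails 1% of the time,
# and you can stomach an expected loss of 5 BTC per arrangement.
p = 0.01
tolerable = 5.0
x = max_exposure(p, tolerable)
print(f"Trust it with at most {x:.0f} BTC (expected loss {p * x:.1f} BTC)")

So with those numbers you'd cap the exposure at 500 BTC; the point is just that X scales down as your estimate of P goes up.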