Suppose Alice trusts Bob (in a generic sense), Bob trusts Charlie, and Charlie vouches for Dave as being an excellent food critic. Alice wants to eat at Restaurant X, which is in fact a terrible restaurant but was rated 5 out of 5 stars by 3 unvetted user accounts (all three of which are actually the restaurant owner, unbeknownst to anyone). It was rated 1 out of 5 by Dave. Yelp averages these into a composite score of 4 out of 5 stars ((5 + 5 + 5 + 1) / 4). Obviously this is a misleading average score! How do we solve this problem using a web of trust?
I would like to see a solution whereby Restaurant X's score is a weighted average in which Dave's 1-star rating is given far more weight than the 5-star ratings from the unvetted users, so that Alice sees a composite score of (for example) 1.2 out of 5 instead of 4 out of 5. If she wants, Alice can visually inspect her web of trust and trace the connection between herself and Dave, but this would be very time-intensive, and it should not be necessary for her to do this for every single rater of every single restaurant under consideration.
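To make the weighted-average idea concrete, here is a minimal sketch in Python. This is not anything identifi actually implements; the trust-decay weighting, the parameter names (TRUST_DECAY, UNVETTED_WEIGHT), and the specific numbers are all assumptions chosen purely to illustrate how a trust-distance-based weight could flip the composite score.

```python
# Minimal sketch of a trust-weighted rating average (illustrative only).
# TRUST_DECAY and UNVETTED_WEIGHT are made-up parameters, not identifi settings.
from collections import deque

# Directed trust edges: Alice -> Bob -> Charlie -> Dave
trust_edges = {
    "alice": ["bob"],
    "bob": ["charlie"],
    "charlie": ["dave"],
}

# Ratings of Restaurant X: three sock-puppet 5-star ratings plus Dave's 1-star.
ratings = [
    ("sock1", 5), ("sock2", 5), ("sock3", 5),
    ("dave", 1),
]

TRUST_DECAY = 0.8       # weight multiplier per hop in the trust chain (assumed)
UNVETTED_WEIGHT = 0.01  # weight for raters outside the viewer's web of trust (assumed)

def trust_distance(viewer, rater, edges):
    """Breadth-first search for the number of hops from viewer to rater."""
    seen, queue = {viewer}, deque([(viewer, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == rater:
            return dist
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None  # not reachable: rater is outside the viewer's web of trust

def weighted_score(viewer, ratings, edges):
    """Average the ratings, weighting each rater by trust distance from the viewer."""
    total, weight_sum = 0.0, 0.0
    for rater, stars in ratings:
        dist = trust_distance(viewer, rater, edges)
        weight = UNVETTED_WEIGHT if dist is None else TRUST_DECAY ** dist
        total += weight * stars
        weight_sum += weight
    return total / weight_sum

naive = sum(stars for _, stars in ratings) / len(ratings)
print(f"naive average:    {naive:.1f}")                                       # 4.0
print(f"weighted average: {weighted_score('alice', ratings, trust_edges):.1f}")  # ~1.2
```

With these assumed weights, Dave's rating (3 hops from Alice, weight 0.8^3 = 0.512) dominates the three unvetted ratings (weight 0.01 each), so Alice sees roughly 1.2 out of 5 without ever tracing the chain by hand. Any real scheme would need a more careful weighting function, but the shape of the computation is the point.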
Is it envisioned that identifi will enable a solution like this? If it's not, then we need to develop such a vision.
Addendum: I would like to see a solution that does all of the above and, in addition, can hide some of the connections. For example: suppose Bob wants to keep his connection to Charlie private. There should still be a way for Alice to know that her web of trust tells her that Dave is an excellent food critic and Restaurant X is a bad one, WITHOUT revealing the full connection between her and Dave. I believe that this is very much doable (perhaps using zero-knowledge proofs, or maybe other cryptographic tools), but that the first step should be to tackle the simpler scenario where all connections and ratings are public. Then and only then can we turn our attention to the privacy-preserving algorithms.