Eureka! It is this simple:
- Every predictor gives two prices on a log scale, e.g. "On 2014-05-16 the price is between 2.7 and 2.85 (roughly 500 and 700)"
- When the actual price is known, you take min[ abs(actual - upper_limit), abs(actual - lower_limit) ] as the error (see the sketch after this list)
- Whoever has the lowest average error after a reasonable number of predictions (predictions can be renewed as often as you wish, regardless of their maturity) is the best!
- Proof omitted
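A minimal sketch of the quoted scoring rule in Python (the function names and sample numbers are my own illustration; I'm assuming prices and limits are already given as log10 values):

```python
def score_prediction(actual, lower, upper):
    # Error of one prediction under the quoted rule: absolute distance from
    # the realized (log10) price to the nearer of the two predicted limits.
    return min(abs(actual - upper), abs(actual - lower))

def average_error(predictions):
    # Mean error over (actual, lower, upper) tuples; the predictor with the
    # lowest average after enough predictions is declared the best.
    return sum(score_prediction(a, lo, hi) for a, lo, hi in predictions) / len(predictions)

# Hypothetical example: predicted 2.70-2.85 (roughly $500-$700), actual came in at 2.80
print(score_prediction(2.80, 2.70, 2.85))  # 0.05
```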

I would be very grateful if you could explain this to a simpleton like myself.
Whoever was the closest to the actual price with the narrowest range was the best. I think he's being a little facetious here, because this is, of course, obvious.
Except that it doesn't actually work.
Proof by counterexample: imagine a forecast range of 50-100. If the outcome is 95, i.e. within the range, the formula produces a score of 5. If the outcome is 105, i.e. outside the range, the formula also produces a score of 5. Clearly the first situation should score better, but with this formula it does not. QED?
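Checking that arithmetic with a quick sketch (my own function name; plain prices rather than log10 values, since only the differences matter here):

```python
def score_prediction(actual, lower, upper):
    # min distance to either limit, per the quoted formula
    return min(abs(actual - upper), abs(actual - lower))

print(score_prediction(95, 50, 100))   # 5 -> outcome inside the range
print(score_prediction(105, 50, 100))  # 5 -> outcome outside the range, same score
```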
Edit: I can think of more examples where it doesn't work, too; can I leave those as an exercise for the reader?
Edit 2: For those wondering how to do it properly, I suggest searching the meteorology literature - it's much more comprehensive on this issue than the financial/economic/econometric literature.
Suppose rpietila's formula was amended to yield zero in the case that the prediction is within the range. Then your offered counterexample fails. Did you have others that would prove the amended scoring formula invalid?
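If I read the amendment right, a sketch of it would look like this (my own naming; zero error whenever the outcome falls inside the predicted range, otherwise the distance to the nearer limit):

```python
def amended_score(actual, lower, upper):
    # Zero if the prediction contained the outcome...
    if lower <= actual <= upper:
        return 0.0
    # ...otherwise distance to the nearest limit.
    return min(abs(actual - upper), abs(actual - lower))

print(amended_score(95, 50, 100))   # 0 -> inside the range now scores better
print(amended_score(105, 50, 100))  # 5
```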
Note, to anyone interested, that the reduction of the price to log10 form allows the predictions to be compared on widely differing timescales, in which price values might be 10x larger or smaller. rpietila has been talking about deviation from the log10 trendline in terms of these log10 deltas.
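For concreteness, the log10 reduction mentioned above, using the numbers from the quoted prediction (a small sketch):

```python
import math

for price in (500, 700):
    # 500 -> 2.7, 700 -> 2.85; a fixed log10 delta corresponds to the same
    # percentage move at any price level, so errors are comparable across scales
    print(price, round(math.log10(price), 2))
```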