Congratulations to Risto on his predictions.
Ten days or two weeks ago he predicted that we would probably be back above 500 by 15 April.
Thanks. 500 was just flashing when I first read that.

I am thinking of showing an example of accountable forecasting, as follows (figures are examples only):
In 30 days, 2014-5-15, USD/BTC (Bitstamp daily vwa) is:
75% = 700
50% = 600
25% = 500
In 90 days, 2014-7-15, USD/BTC (Bitstamp daily vwa) is:
75% = 1800
50% = 1000
25% = 600
After the prediction expires, let's say the first one closes at 550; then we calculate the difference, in log scale, between each forecast price and the actual price (log10 of forecast/actual):
75% = 0.105
50% = 0.038
25% = -0.041
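The calculation above can be sketched as follows (assuming, as the figures imply, that the log-scale error is log10 of the forecast price over the actual closing price; the 550 close is the hypothetical from the example):

```python
import math

# Figures from the worked example: quantile -> forecast price.
forecasts = {0.75: 700, 0.50: 600, 0.25: 500}
actual = 550  # hypothetical closing price

# Log-scale error of each forecast against the actual price.
errors = {q: math.log10(price / actual) for q, price in forecasts.items()}
for q, err in sorted(errors.items(), reverse=True):
    print(f"{int(q * 100)}% = {err:.3f}")
```

Running this reproduces the three error figures above (0.105, 0.038, -0.041).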
Now our common goal is to find a formula that lets us compare different people's price forecasts equitably.
For the midprice, I propose the least-average-error method, i.e. take the absolute value of each error over time and average them. (In my opinion someone who is consistently wrong to the same side should also be penalized, but how do we do that without making the system gameable?)
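A minimal sketch of that least-average-error score for a series of expired midprice predictions; the function name and the track-record figures are invented for illustration:

```python
import math

def midprice_score(pairs):
    """Average absolute log10 error over (forecast, actual) price pairs.

    Lower is better; a perfect forecaster scores 0.
    """
    return sum(abs(math.log10(f / a)) for f, a in pairs) / len(pairs)

# Hypothetical track record of three expired 50% forecasts.
history = [(600, 550), (1000, 900), (800, 850)]
print(f"score = {midprice_score(history):.4f}")
```

Taking the absolute value before averaging means overshoots and undershoots cannot cancel out, which is exactly why a separate one-sided-bias penalty would still be needed.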
With the 25% and 75% bands it is trickier. My intention is that the target should be to give bands as tight as possible, while allowing 25% of the forecasts to be in error on each tail side. Perhaps calculate the difference between the 75% and 50% forecasts (in log scale) and average these, up to the limit of 25% (1 - 75%) of the forecasts being erroneous. If a forecast is too conservative, that is reflected in wide bands and thus a worse score. If it is too tight and more than 25% of the results end up on the tail side, there would be a significant penalty based on the difference between the actual prices and the forecasts, multiplied by the number of excess erroneous results.
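One way the band scoring could look in code; the exact penalty formula (and the `penalty_weight` constant) is my assumption, not a settled part of the proposal:

```python
import math

def band_score(records, allowed_tail_frac=0.25, penalty_weight=10.0):
    """Score a series of (p25, p50, p75, actual) band forecasts; lower is better.

    Tightness term: mean log-scale half-width of the 25%-75% band.
    Penalty term: kicks in only when more than the allowed fraction of
    outcomes land past a band edge, scaled by how far past they landed
    and by the number of excess misses.
    """
    n = len(records)
    width = sum(math.log10(p75 / p50) + math.log10(p50 / p25)
                for p25, p50, p75, _ in records) / (2 * n)
    misses_hi = [(a, p75) for p25, p50, p75, a in records if a > p75]
    misses_lo = [(a, p25) for p25, p50, p75, a in records if a < p25]
    allowed = int(allowed_tail_frac * n)
    penalty = 0.0
    for misses in (misses_hi, misses_lo):
        excess = max(0, len(misses) - allowed)
        if excess:
            dist = sum(abs(math.log10(a / edge)) for a, edge in misses) / len(misses)
            penalty += penalty_weight * excess * dist
    return width + penalty

# Hypothetical single forecast from the example: close inside the band...
print(band_score([(500, 600, 700, 550)]))
# ...versus the same band missed on the high side.
print(band_score([(500, 600, 700, 800)]))
```

With this structure, padding the bands raises the width term, while overly tight bands eventually trip the miss penalty, so neither extreme is free.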
I have not thought this through fully; I wanted to get feedback!
