Board: Altcoin Discussion
Re: BTC-like cryptocurrency with arbitrary tradeable computation in proofs of work
by Alex Coventry on 21/02/2012, 01:27:21 UTC
I'll try to find some time in the next couple days to elaborate on my thoughts... I'll also clean up the stuff I wrote before I realized the optimization attack would be an issue.
Thanks, I'd appreciate that.

(it may also be better to describe it as running arbitrary stochastic algorithms...
Yes, I should change MCMC in the abstract to straight Monte Carlo.

This may indeed still leave the system attractive vs something like litecoin but still ultimately limits its effectiveness over a more straight forward pay-for-computation model...
This gets into questions of marketability, which I find really hard to think clearly about, but you're not the only person to express concern about the inefficiency compared to bare metal.  It probably needs to be addressed.

For the defense against the "optimization attack," the only tradeoff I can think of that recovers speed is centralization.  Full SolidCoin-style trusted-node verification on every second block would work, at least for this aspect.  That was actually my first approach, but I was looking for a tradeoff which preserved decentralization.  Perhaps there is a way to federate the trusted nodes along the lines of Ben Laurie's mintette proposal.  Centralization would simplify many aspects of the design, though.
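To make the alternating-block idea concrete, here is a minimal sketch, assuming a hypothetical block layout and a placeholder signature check (neither is from the paper): even-height blocks are accepted only if a trusted node has signed off on them.

```c
/* Minimal sketch (hypothetical, not from the paper) of the alternating
 * trusted-verification rule: even-height blocks are accepted only with
 * a signature from a trusted node.  The block layout and the signature
 * check are assumptions, not a real API. */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint64_t height;
    uint8_t  hash[32];
    uint8_t  trusted_sig[64];   /* only meaningful on even-height blocks */
} block_t;

/* Placeholder: a real node would verify trusted_sig over hash against a
 * hard-coded (or federated) set of trusted-node public keys. */
static bool verify_trusted_signature(const block_t *b) {
    (void)b;
    return true;
}

static bool validate_block(const block_t *b) {
    /* ... ordinary proof-of-work and transaction checks elided ... */
    if (b->height % 2 == 0)             /* every second block */
        return verify_trusted_signature(b);
    return true;
}
```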

For the floating-point consistency, it might be possible to make a software-f.p. "fuzzing library" which randomizes the least-significant bits of each f.p. operation across multiple runs, to check numerical stability.  Then algorithm submitters would be responsible for using the fuzzer to ensure that their numerical calculations are stable to, say, 20 significant bits, and the f.p. values used in the hash could be rounded accordingly.  All results reported to the network would still be checked using software f.p., but the bulk of the computation could be bare-metal.  To prevent a "pessimization attack" via posing a numerically unstable algorithm which causes most bare-metal results to fail the software-f.p. check, the default miner could compare results generated by the fuzzer library every hundred iterations or so, and get a prize for reporting an example of numerical instability to the network.  Such reports would trigger miners to fail over to the default hashing algorithm for that block.  But requiring this of submitters would be a significant usability setback.  (Edit:  On the other hand, this will probably make William Kahan happy.)
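A minimal sketch of both pieces, under my own assumptions (function names, the choice of 3 fuzzed bits, and the XOR perturbation are all made up for illustration, not from the paper):

```c
/* Minimal sketch (hypothetical, not from the paper) of both pieces:
 * (1) fuzz(): randomize the lowest mantissa bits of each f.p. result,
 *     so repeated runs expose numerical instability; and
 * (2) round_sig_bits(): round to ~20 significant bits before hashing,
 *     so runs that agree to that precision hash identically.
 * NaN/infinity handling and PRNG seeding are elided for brevity. */
#include <math.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define FUZZ_BITS 3   /* how many low mantissa bits to randomize */

static double fuzz(double x) {
    uint64_t bits;
    memcpy(&bits, &x, sizeof bits);     /* reinterpret without UB */
    bits ^= (uint64_t)rand() & ((1u << FUZZ_BITS) - 1);
    memcpy(&x, &bits, sizeof x);
    return x;
}

/* In fuzzing mode, every f.p. operation in the sandbox would run
 * through wrappers like these. */
static double fuzzed_add(double a, double b) { return fuzz(a + b); }
static double fuzzed_mul(double a, double b) { return fuzz(a * b); }

/* Round x to `bits` significant bits; hash these, not raw doubles. */
static double round_sig_bits(double x, int bits) {
    int e;
    double m = frexp(x, &e);            /* x = m * 2^e, 0.5 <= |m| < 1 */
    m = ldexp(round(ldexp(m, bits)), -bits);
    return ldexp(m, e);
}
```

The stability check would then amount to: run the algorithm a few times under the fuzzed wrappers and confirm that round_sig_bits(result, 20) comes out identical each time.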

You don't seem to have addressed resource exhaustion except for speed
I didn't mention this in the paper, but the plan there is to allocate a sizeable chunk of memory at the start of the run and tell malloc to assign all memory from that chunk.  (I needed to think about this anyway, because an earlier iteration used only the seccomp sandbox, and that would not allow mmap.)
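Something like the following bump-pointer arena is what I have in mind; a minimal sketch, with the 64 MiB figure and the arena_alloc name made up for illustration (a real build would route malloc itself into this, e.g. by linking in a replacement allocator):

```c
/* Minimal sketch (hypothetical, not from the paper): grab a fixed arena
 * up front and satisfy every allocation from it with a bump pointer, so
 * a submitted algorithm has a hard memory budget and can never exhaust
 * the host. */
#include <stddef.h>
#include <stdint.h>

#define ARENA_SIZE (64u * 1024 * 1024)  /* hard budget: 64 MiB (made-up figure) */

static uint8_t arena[ARENA_SIZE];
static size_t  arena_used;

static void *arena_alloc(size_t n) {
    n = (n + 15) & ~(size_t)15;          /* 16-byte alignment; real code
                                            would also guard overflow */
    if (n > ARENA_SIZE - arena_used)
        return NULL;                     /* budget exhausted: fail cleanly */
    void *p = &arena[arena_used];
    arena_used += n;
    return p;
}
```

An allocation past the budget just returns NULL to the sandboxed algorithm instead of touching the host's memory, and no mmap call is ever needed after startup.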

... more after I've had a chance to think for a bit longer
Looking forward to it!