Post
Topic
Board Announcements (Altcoins)
Re: [ANN] [XEL] :: Elastic - The Decentralized Supercomputer ::
by
xxxgoodgirls
on 10/06/2017, 14:47:18 UTC


The very short version:

- A job requester would annotate an SSA-form program (in a specific machine model resulting in a particularly structured binary flow graph) with a simple liveness/reachability model, so that miners could quickly verify the necessary complexity bound of the job's individual work task before selecting it, without running any example-case inputs.
- PoW solutions would operate a little differently: still using "per user" generated inputs (including nonce data), but basically hashing/checking "instruction by instruction" instead of at the end of each input run, so that PoW solution rates become uniform across jobs and a solution can be found at any point mid-execution.
- The PoW prize pool would probably also need to work a little differently, with any number of proof-of-work certificates able to be submitted before a bounty is found, and the pool being divided proportionally afterward.
- Bounty solutions would include an annotation of the original model with information about the eventual I/O relation, such that verifying the output submission can be reduced to an instance of a satisfiability problem.
- Nodes (all of them) would validate solutions against this model. They would still need to "re-run the program" by symbolic interpretation, but could know that they are doing so in an optimally efficient way, effectively skipping any "unrelated loops" encountered.
- Jobs would always end after one bounty is found.
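The "instruction by instruction" hashing idea above can be sketched in a few lines of Python. This is a minimal illustration, not Elastic's actual format: the instruction encoding, the SHA-256 hashing scheme, and the `POW_TARGET` difficulty are all assumptions made up for the example. The point it shows is that a certificate check happens after every value assignment, so a hit can occur at any point mid-execution rather than only at the end of a run.

```python
import hashlib

# Hypothetical difficulty target: a digest interpreted as a 256-bit
# integer counts as a certificate if it falls below this value.
POW_TARGET = 1 << 245

def run_with_per_instruction_pow(instructions, inputs, nonce):
    """Execute a straight-line program, checking for a PoW certificate
    after every value assignment instead of once per input run.

    `instructions` is a list of (dest, op, arg_names) triples, where
    arg_names refer to earlier destinations or input variables.
    """
    env = dict(inputs)
    certificates = []
    for step, (dest, op, arg_names) in enumerate(instructions):
        env[dest] = op(*(env[name] for name in arg_names))
        # Hash the nonce together with the step index and the newly
        # assigned value; every value propagation is one PoW attempt.
        digest = hashlib.sha256(
            f"{nonce}:{step}:{dest}:{env[dest]}".encode()
        ).digest()
        if int.from_bytes(digest, "big") < POW_TARGET:
            certificates.append((step, digest))
    return env, certificates
```

With this structure, a job whose runs execute twice as many assignments gets twice as many certificate attempts per run, which is exactly the uniformity property described above.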

I'm summarizing a lot, but that is the basic idea.


Even shorter version:  Instead of giving a machine and getting an input/output pair, you give a model of a machine's behavior and get back an input/output pair's relationship to that machine.  Instead of having a chance to find a PoW certificate at the end of each work attempt (the effort of which varies run to run), there is a chance to find a PoW certificate for each individual value propagation in the program, which would be uniform both run-to-run and job-to-job.  A job requiring on average twice as many assignments per attempt would generate, on average, twice as many work proofs for the same number of attempts run.
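The proportional prize-pool split mentioned above (any number of certificates submitted before a bounty is found, pool divided afterward) is simple to state concretely. A minimal sketch, assuming the only input is a per-miner certificate count (the function name and payout-as-float convention are made up for illustration):

```python
def divide_pow_pool(pool_amount, certificates_by_miner):
    """Split a job's PoW prize pool proportionally to the number of
    work certificates each miner submitted before the bounty was found.

    `certificates_by_miner` maps miner id -> certificate count.
    Returns a map of miner id -> payout share of `pool_amount`.
    """
    total = sum(certificates_by_miner.values())
    if total == 0:
        return {}  # no work proofs submitted, nothing to divide
    return {
        miner: pool_amount * count / total
        for miner, count in certificates_by_miner.items()
    }
```

Because certificate rates are uniform per value propagation, expected payouts under this split track actual work done regardless of which job a miner chose.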

I posted my last comment before I saw this.  These are great ideas and definitely worth exploring further...

Thank you for the contribution, HunterMinerCrafter. I wonder, has any of the contributors explored this concept further?