Re: Potentially faster method for mining on the CPU
Board: Mining
Post by bill86 on 11/08/2013, 12:11:35 UTC
@botnet
Now I understand your idea better, but you may be wrong: the main complexity of your approach comes not from the degree of the polynomials but from the way the literals (you called them 'variables') reference each other.

At this point you have only two choices:
1. Try random input (you know this solution under the arguably wrong name 'brute force'); a minimal sketch follows this list.
2. Try to approximate a good solution, but then your result would only be partly correct (who is interested in a nonce that is 90% correct for the given block and target?), and it is not even proven that such an approximation exists.
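
For illustration, here is a minimal brute-force sketch of option 1, assuming Python; double_sha256 and the little-endian comparison are simplifications of the real block-hash check, and header_without_nonce and target are placeholders.

```python
import hashlib
import struct

def double_sha256(data):
    # Bitcoin's block hash is SHA-256 applied twice.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def brute_force(header_without_nonce, target):
    """Try every 32-bit nonce until the block hash falls below the target."""
    for nonce in range(2 ** 32):
        header = header_without_nonce + struct.pack("<I", nonce)
        # Interpret the digest as an integer and compare against the target
        # (endianness simplified for this sketch).
        if int.from_bytes(double_sha256(header), "little") < target:
            return nonce
    return None  # the whole nonce range can be exhausted without a hit
```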

I think there is some misunderstanding of the technique, which I will try to clear up with an example. It should in fact give an exact solution, not an approximate one.
What I saw is that your problem is possibly NP-hard in some form, so I pointed to the only viable ways to get feasible results out of your algorithm: random input or approximation.

In my proposed approach, this is done symbolically. First, perform the addition and XOR operations symbolically to produce a set of equations that represents each of the 8 bits of 'target' in terms of the 8 bits of 'nonce' (a sketch of solving these equations exactly follows them):

t0: (!n0)
t1: (Xor[1, n0, n1])
t2: (Xor[n2, n1 && n0])
t3: (Xor[1, !n3, n2 || (n1 && n0)])
t4: (Xor[n4, n3 || n2 || (n1 && n0)])
t5: (Xor[1, n4 && (n3 || n2 || (n1 && n0)), !n5])
t6: (Xor[1, n6, n5 || (n4 && (n3 || n2 || (n1 && n0)))])
t7: (Xor[n6 && (n5 || (n4 && (n3 || n2 || (n1 && n0)))), !n7])
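
To make the "exact, not approximate" point concrete, here is a minimal sketch, assuming Python with SymPy; the target value 0x9C and the helper solve_nonce are illustrative only, and the eight expressions are transcribed from the equations above.

```python
from sympy import symbols, Xor, And, Or, Not, Equivalent
from sympy.logic.inference import satisfiable

n0, n1, n2, n3, n4, n5, n6, n7 = symbols("n0 n1 n2 n3 n4 n5 n6 n7")

c01 = And(n1, n0)  # shared sub-expression n1 && n0
t = [
    Not(n0),
    Xor(True, n0, n1),
    Xor(n2, c01),
    Xor(True, Not(n3), Or(n2, c01)),
    Xor(n4, Or(n3, n2, c01)),
    Xor(True, And(n4, Or(n3, n2, c01)), Not(n5)),
    Xor(True, n6, Or(n5, And(n4, Or(n3, n2, c01)))),
    Xor(And(n6, Or(n5, And(n4, Or(n3, n2, c01)))), Not(n7)),
]

def solve_nonce(target):
    """Return a nonce whose bits satisfy every t_i == target_i exactly, or None."""
    constraints = [Equivalent(t[i], bool((target >> i) & 1)) for i in range(8)]
    model = satisfiable(And(*constraints))  # this is literally a SAT query
    if model is False:
        return None
    bits = [n0, n1, n2, n3, n4, n5, n6, n7]
    return sum(1 << i for i, b in enumerate(bits) if model.get(b))

nonce = solve_nonce(0x9C)  # 0x9C is just an arbitrary example target
print(hex(nonce) if nonce is not None else "no exact solution")
```

Note that the last step hands the whole equation system to a satisfiability check, which is exactly why the hardness question below matters: at 8 bits it is instant, but that gives no guarantee at realistic sizes.
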
I do not have the time right now to write a proof, so I am not fully confident about the hardness of your problem at the moment. But at first glance I see a starting point for proving that your algorithm could also solve the 'Linear Equations Problem' (often referred to as 'LIN').

In that case your problem would be at least NP-hard, and you would be out of luck finding an efficient exact solution for it.

This kind of algorithm (if your problem turns out to be NP-hard) is a little treacherous: on small test inputs it feels like a good solution, but feed it the real-data workload and it will not terminate within a couple of years (or sometimes millions of years). It will certainly deliver a result eventually (if correctly designed and programmed, i.e. it terminates), but nobody will ever profit from the output.

A good example is the Travelling Salesperson Problem (TSP) (yes, I know that TSP is NP-complete and not merely NP-hard, but that makes no difference here). With just five or six cities in the input, it has good solutions and is acceptably fast. But give it all the cities of even a small country and it will not terminate within some hundreds of years of waiting time (see the sketch below).
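
A minimal brute-force TSP sketch, assuming Python; dist is a placeholder symmetric matrix of pairwise distances. For n cities it enumerates (n-1)! tours, which is why five or six cities are fine and a whole country's worth is hopeless.

```python
from itertools import permutations

def shortest_tour(dist):
    """Exhaustively search all tours starting from city 0 and return the best one."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):  # fix city 0 as the starting point
        tour = (0,) + perm
        length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_tour, best_len

# Six cities means 5! = 120 tours; twenty cities already means 19! ~ 1.2e17 tours.
```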

Please keep in mind that there is a small chance that I have overlooked something and your problem is in fact not NP-hard. Only a polynomial-time reduction from LIN to your problem (or from SAT to your problem) would remove all doubt.