Post
Topic
Board Archival
Re: delete
by
TheFascistMind
on 01/10/2014, 09:18:55 UTC
Size of the solution space isn't the right word. I mean the function of the size changes from exponential (O(2^n)) to polynomial (n^O(1), i.e. O(n^k)).

If it is possible to convert the computation time (resource cost) from an exponential function of the inputs of the algorithm to a polynomial function of those inputs, then the complexity has been reduced from NP to P.
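As an illustration of converting an exponential-time algorithm into a polynomial-time one (an algorithmic speedup on one problem, not a proof about complexity classes), here is the classic memoization example; the function choice is mine, not from the thread:

```python
from functools import lru_cache

def fib_naive(n: int) -> int:
    # Naive recursion recomputes subproblems: exponential time in n.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n: int) -> int:
    # Memoization computes each subproblem once: O(n) time.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(90))  # returns instantly; fib_naive(90) would never finish
```

The algorithm changed from exponential to polynomial, but the problem itself was always in P; the point is only to show what "the function of the inputs" means concretely.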

I cited a link which I believe demonstrated this, although I may be mistaken.

The search space for brute-force inversion of SHA256 is 2^bits, e.g. a 128-bit hash has 2^128 possible outputs. All known methods for inverting SHA256 are NP relative to bits, and often cryptanalysis attacks remain in NP and only reduce the exponent by a factor that makes them practical to solve for certain n. For example, for finding collisions the birthday attack is 2^(bits/2). However, some attacks may reduce the complexity to P, e.g. a quantum computer (Shor's algorithm) on RSA reduces integer factorization from sub-exponential to polynomial.
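The gap between the 2^bits brute-force preimage search and the 2^(bits/2) birthday bound can be demonstrated on a toy hash; truncating SHA-256 to a few bits (the widths and helper names below are my own, chosen so the demo finishes quickly) makes both attacks feasible:

```python
import hashlib
import itertools

def toy_hash(data: bytes, bits: int = 24) -> int:
    # Truncate SHA-256 to `bits` bits so brute force is feasible in a demo.
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

def find_preimage(target: int, bits: int = 24) -> bytes:
    # Brute-force preimage search: expected ~2^bits trials.
    for i in itertools.count():
        msg = i.to_bytes(8, "big")
        if toy_hash(msg, bits) == target:
            return msg

def find_collision(bits: int = 24):
    # Birthday attack: expected ~2^(bits/2) trials before two inputs collide.
    seen = {}
    for i in itertools.count():
        msg = i.to_bytes(8, "big")
        h = toy_hash(msg, bits)
        if h in seen:
            return seen[h], msg
        seen[h] = msg
```

At 24 bits the collision search finishes after a few thousand trials while the preimage search needs millions; each extra bit doubles both, which is the exponential scaling in question.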

NP requires that the solution can be verified in polynomial time. For example, the verification that the input to a hash produces a certain output.
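That asymmetry is easy to make concrete: checking a claimed preimage is a single hash evaluation, no matter how expensive finding it was. A minimal sketch:

```python
import hashlib

def verify(claimed_preimage: bytes, target_digest: bytes) -> bool:
    # Verification is one hash evaluation: polynomial (essentially linear)
    # in the input length, regardless of how the preimage was found.
    return hashlib.sha256(claimed_preimage).digest() == target_digest

target = hashlib.sha256(b"hello").digest()
print(verify(b"hello", target))    # True
print(verify(b"goodbye", target))  # False
```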

Please feel free to correct me if I am still wrong.

It is not an error of fact, but only the use of a theory that is not really relevant to the problem.

All you wrote is correct, but, as you note, NP and P (and the O() notation) are meaningful only if the number of bits n is considered variable, and they describe how the cost grows ultimately as n goes to infinity (informally, "just before n reaches infinity").  The theory has nothing to say if one considers a specific n (say, 256), or any n less than some fixed bound (say, n up to 1 million bits).  In that case, the complexity classes cannot be distinguished: every function with a finite domain can be computed in polynomial time, indeed in O(1) operations.  This is a trivial observation that follows directly from the definitions.
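The "every function with a finite domain is O(1)" observation can be made literal with a precomputed lookup table; the stand-in function and domain bound below are arbitrary illustrations:

```python
def hard_function(x: int) -> int:
    # Stand-in for some expensive computation on a bounded domain.
    return pow(3, x, 1_000_003)

DOMAIN_BOUND = 1 << 16  # fixed bound on the inputs

# Paid once, up front: tabulate the function over its entire (finite) domain.
TABLE = [hard_function(x) for x in range(DOMAIN_BOUND)]

def hard_function_o1(x: int) -> int:
    # After precomputation, every query is a single indexed read: O(1).
    return TABLE[x]
```

This is exactly why the asymptotic classes say nothing once n is bounded: the table's size is a (possibly astronomical) constant, and constants vanish in O() notation.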

The definition of polynomial time is precisely the time complexity class O(n^k).

The relevance is that for the NP complexity class, very small increases in n cause exponential (not polynomial) increases in computation cost.
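The difference in growth rates is easy to tabulate; the polynomial exponent and the sample sizes below are arbitrary choices for illustration:

```python
# Compare a polynomial cost n^3 against an exponential cost 2^n
# as n doubles a few times.
for n in [10, 20, 40, 80]:
    print(f"n={n:3d}  n^3={n**3:>9,}  2^n={2**n:,}")
```

Doubling n multiplies n^3 by 8, but squares 2^n; by n = 80 the exponential column has 25 digits while the polynomial one has 6.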

If the best known inversions of SHA256 are NP, then NP is relevant because it means there is some smallish value of n for which no inversion is practical. Afaics, this is what complexity theory buys us.

It is unfortunate that complexity theorists still teach computer science students that their theory has practical relevance, to the point of using the word "efficient" as a synonym of "polynomial time".  In fact, that theory is as relevant to software development as the Banach-Tarski paradox is to manufacturing.

Sir, in year 2000 I paid $30,000 for a week of work to Jeff Stock, a former lead developer of Borland C, to code the Objects Window in my Cool Page product. He had tested it with a few objects. I loaded it up with 1000 objects and it slowed to molasses. I asked him to fix the problem. After a few days he hadn't fixed it, so I took an hour to study his code and changed one line, which reduced the time complexity from O(n^k) to O(log n).
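I can't show the actual one-line change, but a classic fix of exactly this shape is replacing a per-object linear scan with a binary search over a sorted list, turning each lookup from O(n) into O(log n); the scenario and names below are invented for illustration:

```python
import bisect

def find_linear(z_orders: list, target: int) -> int:
    # Linear scan per lookup: O(n). Done once per object, the total
    # work over n objects is O(n^2) -- molasses at 1000 objects.
    for i, z in enumerate(z_orders):
        if z == target:
            return i
    return -1

def find_binary(sorted_z_orders: list, target: int) -> int:
    # Keeping the list sorted lets bisect do each lookup in O(log n).
    i = bisect.bisect_left(sorted_z_orders, target)
    if i < len(sorted_z_orders) and sorted_z_orders[i] == target:
        return i
    return -1
```

Often the sorted invariant already holds (or is one line to maintain), which is how a single-line change can collapse the complexity class of the hot path.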

Afaics, time complexity is relevant to computer science practice in the real world.