"Size of the solution space" isn't the right term. I mean that the cost as a function of the input size n changes from exponential, O(2^n), to polynomial, O(n^k).
If it is possible to convert the computation time (resource cost) from an exponential function of the size of the algorithm's inputs to a polynomial function of that size, then the complexity has been reduced from NP to P.
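To make the contrast concrete, here is a small sketch of my own (not from the cited link); the n^3 polynomial is just an arbitrary example:

```python
# Contrast how an exponential cost 2^n and a polynomial cost n^3 scale
# with the input size n.  The widening gap is why reducing a cost from
# exponential to polynomial matters once n gets large.
for n in (8, 16, 32, 64, 128, 256):
    exponential = 2 ** n   # e.g. brute-force search over all n-bit inputs
    polynomial = n ** 3    # e.g. a hypothetical O(n^3) algorithm
    print(f"n = {n:3d}   2^n = {exponential:.3e}   n^3 = {polynomial}")
```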
I cited a link which I believe demonstrated this, although I may be mistaken.
The search space for brute-force inversion of SHA256 is 2^bits; e.g., a 128-bit hash has 2^128 possible outputs. All known methods for inverting SHA256 are NP relative to the number of bits, and cryptanalytic attacks often remain in NP, only reducing the exponent by a factor that makes them practical for certain n. For example, for finding collisions, the birthday attack is 2^(bits/2). However, some attacks may reduce the complexity to P; e.g., a quantum computer running Shor's algorithm reduces integer factorization (the basis of RSA) from sub-exponential to polynomial.
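Here is a rough illustrative sketch of my own (truncating SHA-256 so the searches actually finish; the 24-bit width and helper names are my own choices, not from any real attack):

```python
import hashlib
import os

def trunc_hash(data: bytes, bits: int = 24) -> int:
    """SHA-256 truncated to `bits` bits, so the demo search spaces are tiny."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

def preimage_search(target: int, bits: int = 24) -> bytes:
    """Brute-force inversion: expected on the order of 2^bits attempts."""
    counter = 0
    while True:
        candidate = counter.to_bytes(8, "big")
        if trunc_hash(candidate, bits) == target:
            return candidate
        counter += 1

def birthday_collision(bits: int = 24):
    """Birthday attack for a collision: expected on the order of 2^(bits/2) attempts."""
    seen = {}
    while True:
        candidate = os.urandom(8)
        h = trunc_hash(candidate, bits)
        if h in seen and seen[h] != candidate:
            return seen[h], candidate
        seen[h] = candidate

if __name__ == "__main__":
    # The collision search finishes after roughly 2^12 = 4096 trials, while
    # preimage_search would need roughly 2^24 trials at the same width.
    a, b = birthday_collision()
    print("collision:", a.hex(), b.hex(), hex(trunc_hash(a)))
```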
NP requires that a solution can be verified in polynomial time; for example, verifying that a given input to a hash produces a certain output.
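As a quick sketch of that asymmetry (my own example):

```python
import hashlib

def verify_preimage(claimed_input: bytes, expected_digest: bytes) -> bool:
    # Verification is one hash evaluation, polynomial (linear) in the input
    # length, even though *finding* a preimage by brute force is exponential
    # in the output width.
    return hashlib.sha256(claimed_input).digest() == expected_digest

digest = hashlib.sha256(b"some message").digest()
assert verify_preimage(b"some message", digest)
```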
Please feel free to correct me if I am still wrong.
It is not an error of fact, but only the use of a theory that is not really relevant to the problem.
All you wrote is correct, but, as you note, NP and P (and the O() notation) are meaningful only if the number of bits n is considered variable, and they describe how the cost grows ultimately as n goes to infinity (informally, "just before n reaches infinity"). The theory has nothing to say if one considers a specific n (say, 256), or any n less than some fixed bound (say, n up to 1 million bits). In that case, the complexity classes cannot be distinguished: every function with a finite domain can be computed in polynomial time, indeed in O(1) operations. This is a trivial observation that follows directly from the definitions.
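To make that observation concrete, here is a small sketch (mine, with an arbitrary stand-in function and a deliberately tiny bound):

```python
MAX_BITS = 16  # any fixed bound on the input size

def some_fixed_function(x: int) -> int:
    # Stand-in for any function on a bounded domain, however "hard".
    return pow(x, 3, 65521)

# One-off precomputation over the whole (finite) domain...
table = [some_fixed_function(x) for x in range(2 ** MAX_BITS)]

def lookup(x: int) -> int:
    # ...after which every query is a single O(1) table access.
    return table[x]

assert lookup(12345) == some_fixed_function(12345)
```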
Increasing n causes exponential (not polynomial) increases in computation cost.
If the best known inversions of SHA256 are NP, then NP is relevant because it means there is some smallish value of n for which no inversion is practical. Afaics, this is what complexity theory gains us.
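Back-of-the-envelope numbers for that "smallish n" claim (my own figures, assuming an arbitrary rate of 10^12 hash evaluations per second):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600
rate = 1e12  # assumed hash evaluations per second (illustrative only)

for bits in (64, 80, 128, 256):
    years = 2 ** bits / rate / SECONDS_PER_YEAR
    print(f"{bits:3d}-bit search space: ~{years:.2e} years of brute force")
```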
Sir, in the year 2000 I paid $30,000 for a week of work to Jeff Stock, a former lead developer of Borland C, to code the Objects Window in my Cool Page product. He had tested it with a few objects. I loaded it up with 1000 objects and it slowed to molasses. I asked him to fix the problem. After a few days he hadn't fixed it, so I took an hour to study his code and changed one line, which reduced the time complexity.
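The following Python sketch is purely hypothetical (it is not Jeff's code); it only illustrates the general pattern, an invariant computation hoisted out of a loop, by which a one-line change can turn O(n^2) into O(n):

```python
def slow_layout(objects):
    ys = []
    for obj in objects:
        tallest = max(o["height"] for o in objects)  # recomputed every pass: O(n) inside an O(n) loop -> O(n^2)
        ys.append(obj["row"] * tallest)
    return ys

def fast_layout(objects):
    tallest = max(o["height"] for o in objects)      # the hoisted "one line": computed once -> O(n) overall
    return [obj["row"] * tallest for obj in objects]

objects = [{"row": i, "height": (i * 7) % 13 + 1} for i in range(1000)]
assert slow_layout(objects) == fast_layout(objects)
```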
Afaics, time complexity is relevant to computer science practice in the real world.