I still believe there is some insight to glean from the observation (conjecture or proven?) that converting deterministic intractable problems into tractable ones makes them nondeterministic and unbounded in entropy, but I do not yet see how to frame it in a way that proves anything. Perhaps proving that the conversion must be unbounded would prove P ≠ NP. I need to contemplate that.
On proving P ≠ NP, I do not understand why it can't be proven that the algorithmic Kolmogorov complexity of the "traveling salesman problem" is at least O(n²(2ⁿ - n - 1)), i.e. an exponential lower bound, which is just below the O(n²·2ⁿ) of the Held-Karp algorithm, the best known exact algorithm.
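To make that Held-Karp figure concrete, here is a minimal sketch of the dynamic program in Python (my own toy illustration, not anyone's reference code): the table is keyed by (subset of visited cities, last city), so there are roughly n·2ⁿ entries and each is filled by scanning up to n predecessors, which is where the n²·2ⁿ step count comes from.

```python
# Held-Karp dynamic program for TSP -- a minimal sketch to make the
# O(n^2 * 2^n) bound concrete. `dist` is a full n x n distance matrix;
# city 0 is the fixed starting point.
from itertools import combinations

def held_karp(dist):
    n = len(dist)
    # best[(S, j)] = cheapest cost of a path that starts at city 0, visits
    # exactly the cities in frozenset S (none of them 0), and ends at j in S.
    best = {}
    for j in range(1, n):
        best[(frozenset([j]), j)] = dist[0][j]
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in S:
                best[(S, j)] = min(
                    best[(S - {j}, k)] + dist[k][j] for k in S if k != j
                )
    full = frozenset(range(1, n))
    # close the tour by returning to city 0
    return min(best[(full, j)] + dist[j][0] for j in range(1, n))

if __name__ == "__main__":
    dist = [
        [0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0],
    ]
    print(held_karp(dist))  # optimal tour cost for this 4-city example: 21
```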

My logic is so simple that I must be missing something.
At best the algorithm can cache results so that the optimal computation for each grouping of cities is performed only once (i.e. only as many times as there are unique groupings); by Reed's law we get 2ⁿ - n - 1 groupings of cities. And for the first leg of the path possibilities, Metcalfe's law tells us there are on the order of n² possible unique links between cities.
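A throwaway script to sanity-check those two counts for small n, assuming Reed's law here means groupings of two or more cities and Metcalfe's law means unordered links between distinct cities (n(n-1)/2, i.e. on the order of n²):

```python
# Quick check of the counting above (not part of any proof): subsets of at
# least two cities number 2^n - n - 1, and the pairwise links between n
# cities number n*(n-1)/2.
from itertools import combinations

def count_groupings(n):
    # groupings of size >= 2 drawn from n cities
    return sum(1 for size in range(2, n + 1)
                 for _ in combinations(range(n), size))

def count_links(n):
    # unique unordered links between distinct cities
    return sum(1 for _ in combinations(range(n), 2))

for n in range(2, 8):
    assert count_groupings(n) == 2**n - n - 1
    assert count_links(n) == n * (n - 1) // 2
    print(n, count_groupings(n), count_links(n))
```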
Afaics, it is definitionally impossible to compute in fewer algorithmic steps than the Kolmogorov complexity, because that is the minimum information content of the algorithm, i.e. its entropy.
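For reference, the standard definition I'm leaning on, with U a fixed universal Turing machine and |p| the length of program p:

```latex
% Kolmogorov complexity of a string x: the length of the shortest program
% that makes the universal machine U output x.
K_U(x) = \min \{\, |p| \;:\; U(p) = x \,\}
```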
Even if we applied some heuristic, such as grouping cities into clusters, that would not give an absolute assurance of the optimum path. There is no way to get that assurance while reducing the deterministic algorithmic complexity below the Kolmogorov complexity.
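A toy experiment of my own illustrating that point: a greedy nearest-neighbour heuristic is cheap to compute but comes with no guarantee of optimality, which exhaustive search over all permutations does give.

```python
# Compare a greedy nearest-neighbour tour against the brute-force optimum on
# a small random instance; the heuristic carries no optimality guarantee.
import random
from itertools import permutations
from math import dist

def tour_length(points, order):
    return sum(dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbour(points):
    unvisited = set(range(1, len(points)))
    order = [0]
    while unvisited:
        last = order[-1]
        nxt = min(unvisited, key=lambda j: dist(points[last], points[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

def brute_force(points):
    rest = range(1, len(points))
    return min(((0,) + p for p in permutations(rest)),
               key=lambda order: tour_length(points, order))

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(9)]
greedy = tour_length(pts, nearest_neighbour(pts))
optimal = tour_length(pts, brute_force(pts))
print(f"greedy {greedy:.3f} vs optimal {optimal:.3f}")  # greedy is often strictly worse
```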
Since this problem is NP-hard and the related NP-complete problem can be reduced to it, every problem in NP would share the same lower bound. Independent Set on a planar graph is listed as an exception to NP-complete problems being EXP, but that is for graphs whose edges can't intersect, to which the "traveling salesman problem" obviously can't be reduced without some exponential blowup in the remapping, because the links between cities can overlap.
All the NP-hard problems (and the NP-complete problems reduced to them) revolve around exhaustively searching an exponential set of unique possibilities for an optimum. The Kolmogorov complexity can't be reduced to polynomial, because there can't exist a shortcut that destroys information and is still optimum. Once the computation is below the threshold of the Kolmogorov complexity, the result can't be deterministically optimum.
Here is a nearly duplicate proof, but he fails to multiply by n².
One of the similar attempts at a proof I've seen is this one, which also tries to establish a lower bound on the algorithmic complexity. And in his 2014 update, he mentions Kolmogorov complexity as the justification.
Another paper, on a quick glance, appears to try to prove that all NP-complete problems amount to a clustering problem that can't be solved in polynomial time.
This is very well written.
P.S. I was amazed to stumble onto an attempted proof by our very own cohort MPex.
Edit: A possible reason my idea above doesn't work in a proof, from the researcher who discovered the NP complexity class:
However, all attempts to find even super-linear lower bounds for unrestricted Boolean circuits for explicitly given Boolean functions have met with total failure; the best such lower bound proved so far is about 4n. Razborov and Rudich [27] explain this failure by pointing out that all methods used so far can be classified as natural proofs, and natural proofs for general circuit lower bounds are doomed to failure, assuming a certain complexity-theoretic conjecture asserting that strong pseudorandom number generators exist. Since such generators have been constructed assuming the hardness of integer factorization, we can infer the surprising result that a natural proof for a general circuit lower bound would give rise to a more efficient factoring algorithm than is currently known.

The failure of complexity theory to prove interesting lower bounds on a general model of computation is much more pervasive than the failure to prove P ≠ NP. It is consistent with present knowledge that not only could Satisfiability have a polynomial-time algorithm, it could have a linear time algorithm on a multitape Turing machine. The same applies for all 21 problems mentioned in Karp's original paper [21].
More on that point:
Now the thought that underlies the work of Razborov and Rudich is one that you have probably had yourself: it seems to be extraordinarily hard to tell the difference between a random Boolean function of low complexity and a purely random Boolean function. In fact, so much does this seem to be the case, that one might even hazard a guess that some theorem along the following lines is true: no polynomial-time algorithm can tell the difference between a random Boolean function of low complexity and a purely random Boolean function.
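A toy illustration of that thought (mine, and not the Razborov-Rudich argument itself): build one Boolean function from an efficiently computable keyed hash, i.e. something with low circuit complexity, and one from genuinely random coin flips, then compare some cheap statistics. The two truth tables look essentially identical to such tests.

```python
# Compare a "low complexity" Boolean function (computed by a keyed hash) with
# a purely random one, using a couple of simple statistics over the full
# truth table. Both come out looking the same to these cheap tests.
import hashlib
import random

N_BITS = 14  # 2^14 = 16384 inputs, small enough to enumerate

def low_complexity_f(x):
    # efficiently computable, hence realizable by a small circuit
    digest = hashlib.sha256(b"fixed-key" + x.to_bytes(2, "big")).digest()
    return digest[0] & 1

random.seed(0)
truly_random = [random.getrandbits(1) for _ in range(2 ** N_BITS)]
pseudo = [low_complexity_f(x) for x in range(2 ** N_BITS)]

def stats(table):
    bias = sum(table) / len(table)
    # agreement of f(x) with the low bit of x, one of many simple tests
    corr = sum(1 for x, v in enumerate(table) if v == (x & 1)) / len(table)
    return bias, corr

print("random     ", stats(truly_random))
print("low-compl. ", stats(pseudo))  # both hover around (0.5, 0.5)
```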
Except my logic wasn't employing any probabilistic argument, as apparently required by natural proofs, nor was I stating that NP problems lack some simple feature that P problems have:
[...] consider a commonly envisioned proof strategy for proving P ≠ NP:
Formulate some mathematical notion of "discrepancy" or "scatter" or "variation" of the values of a Boolean function, or of an associated polytope or other structure. [...]
Show by an inductive argument that polynomial-sized circuits can only compute functions of "low" discrepancy. [...]
Then show that SAT, or some other function in NP, has "high" discrepancy.
Our main theorem in Section 4 gives evidence that no proof strategy along these lines can ever succeed.
This is a more comprehensible explanation:
http://www.ams.org/journals/notices/201111/rtx111101586p.pdf