Board: Development & Technical Discussion
Re: Double hashing: less entropy?
by someone42 on 11/06/2012, 16:35:08 UTC
No, because the assumptions I made become less true the more rounds are applied (maybe they're not even accurate enough after one round). The image of SHA-256^N (the set of outputs still reachable after N rounds) shrinks as N grows, until it converges to a fixed set (which is probably very large). On that set, SHA-256 is a permutation (a one-to-one mapping). (This is true of every function from a finite set to itself.)
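To make that shrinking-image picture concrete, here is a quick Python toy (my own illustration, not part of the argument above): it iterates a random function on a deliberately tiny set of 2^16 elements as a stand-in for SHA-256, prints how the image shrinks, and checks that the function permutes the eventual image. The set size and seed are arbitrary choices.

Code:
import random

# Toy illustration of the shrinking-image argument: iterate a random function
# on a small finite set (2^16 elements, an arbitrary stand-in for the
# 2^256-element space of SHA-256) and watch the image shrink. Once the image
# stops shrinking, the function acts as a permutation on it.
random.seed(42)
SIZE = 1 << 16
f = [random.randrange(SIZE) for _ in range(SIZE)]  # a random function on the set

image = set(range(SIZE))
rounds = 0
while True:
    new_image = {f[x] for x in image}  # image of f^(rounds + 1)
    rounds += 1
    if rounds in (1, 2, 4, 8, 16, 32, 64):
        print(f"rounds={rounds:3d}  image size={len(new_image)}")
    if new_image == image:  # image has stabilised: f maps it onto itself
        break
    image = new_image

# On the eventual image, f is one-to-one, i.e. a permutation.
print(f"eventual image size: {len(image)}")
print("f is injective on it:", len({f[x] for x in image}) == len(image))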

I thought it would be interesting to see what the entropy reduction is for multiple rounds. I assumed each round has its own independent random oracle, which maps k * N input elements into a space of N possible outputs, where 0 <= k <= 1 and N = 2^256. For each round, I found that on average exp(-k) * N of the output elements have no preimage, so each round maps k * N elements onto (1 - exp(-k)) * N elements.
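As a toy-scale sanity check of that exp(-k) figure (my own sketch, with N = 100,000 as an arbitrary stand-in for 2^256), the following Python snippet throws k * N uniformly random inputs at an output space of N elements and compares the measured fraction of outputs with no preimage against exp(-k).

Code:
import math
import random

# Toy-scale check of the exp(-k) figure: throw k * N inputs at a fresh random
# oracle with N possible outputs and count the outputs that receive no input.
# The expected no-preimage fraction is (1 - 1/N)^(k*N), which is ~exp(-k).
random.seed(1)
N = 100_000  # arbitrary toy output-space size, standing in for 2^256
for k in (0.25, 0.5, 1.0):
    num_inputs = int(k * N)
    hit = {random.randrange(N) for _ in range(num_inputs)}  # outputs with >= 1 preimage
    empty_fraction = 1 - len(hit) / N
    print(f"k={k:.2f}  measured={empty_fraction:.4f}  exp(-k)={math.exp(-k):.4f}")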

Iterating this, the entropy reduction (ignoring the non-uniform output distribution for now) is:
Round | Cumulative entropy reduction (bits)
    1 | 0.6617
    2 | 1.0938
    4 | 1.6800
    8 | 2.4032
   16 | 3.2306
   32 | 4.1285
   64 | 5.0704
  128 | 6.0381
  256 | 7.0204
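
For anyone who wants to reproduce these numbers, here is a minimal Python sketch of the iteration as I read it: start from the full space (k = 1), apply k -> 1 - exp(-k) once per round, and report -log2(k) as the cumulative entropy reduction in bits. It reproduces the table above to four decimal places.

Code:
import math

# Sketch of the iteration behind the table: start from the full input space
# (k = 1), apply k -> 1 - exp(-k) once per round, and report -log2(k) as the
# cumulative entropy reduction in bits (the non-uniform output distribution
# is still ignored, as above).
k = 1.0
report_at = {1, 2, 4, 8, 16, 32, 64, 128, 256}
for rounds in range(1, 257):
    k = 1.0 - math.exp(-k)
    if rounds in report_at:
        print(f"round {rounds:3d}: cumulative entropy reduction = {-math.log2(k):.4f} bits")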

I don't observe any convergence, and indeed the equation k = 1 - exp(-k) has only one solution, at k = 0 (for k > 0, exp(-k) > 1 - k, so 1 - exp(-k) < k and the iteration keeps shrinking). But this is probably because I assumed that each round had its own independent random oracle; the results may be different for a fixed function like SHA-256.