Board: Bitcoin Discussion
Topic: Re: John Nash created bitcoin
Post by iamnotback on 11/04/2017, 10:52:04 UTC
Well, this is the kind of cryptographic "common sense" that doesn't make sense. As I said before, one has to assume, in a cryptographic design, that the cryptographic primitives are at about the security level that is known, for the simple reason that one cannot predict the deterioration of their security level by future cryptanalysis. For all one knows, it can be total.

You're either not comprehending or being disingenuous. Do you not know that Shor's algorithm is known to theoretically break ECC but not hash functions? I had already told you, as follows, that the hash has greater security against the expected capabilities of quantum computing. It is a prudent precaution.

Another reason (in addition to the compression of the UTXO) to hash the values on the block chain is that when the use of a quantum computer is detected, we have some protection against chaos and can map out a strategy for securely burning the values over to a new design. Hashes are much more likely to be resistant to quantum computing.

If you argue that it doesn't matter whether we have the hashes when ECC is broken by quantum computing, because the transactions can be intercepted and cracked by the attacker before they are confirmed in the network, you are not thinking clearly. Quantum computing at its inception (nascent stages) would likely be able to break only long-term security, not short-term security. So there would be a period to transition, as I already stated in the above quote from my prior post.
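To make the asymmetry concrete, here is a minimal sketch (my own illustrative numbers; real quantum cost estimates depend on qubit counts and error correction, which this ignores) of effective security levels under the known quantum algorithms:

[code]
# Rough effective security levels (in bits) under known attacks.
# Illustrative only: ignores qubit counts and error-correction overhead.

def ecc_classical(key_bits):      # Pollard's rho: square-root cost
    return key_bits // 2

def ecc_quantum(key_bits):        # Shor's algorithm: polynomial time
    return 0                      # effectively zero bits of security

def hash_preimage_classical(digest_bits):
    return digest_bits            # generic brute force

def hash_preimage_quantum(digest_bits):
    return digest_bits // 2       # Grover's algorithm: quadratic speedup only

print("secp256k1 (256-bit):", ecc_classical(256), "->", ecc_quantum(256))
print("SHA-256 preimage:   ", hash_preimage_classical(256), "->",
      hash_preimage_quantum(256))
[/code]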

Please be more thoughtful; I am losing precious time, and I am confident you are smart enough to have figured this out yourself if you'd set aside your highly irrational confirmation biases.

Let us take a very simplistic example to illustrate what I mean (it is simplistic, for illustrative purposes; don't think I'm a cretin like that). Suppose that we take as a group the additive group modulo a prime number, and that we don't know that multiplication forms a field with it. We could have the "discrete log" problem in this group, where recovering n from "the generator g added together n times", with n a random number between 1 and p-1, is the "hard problem to solve", exactly as we do in an elliptic group. Suppose that we take p a 2048-bit number. Now THAT's a big group, isn't it? Alas, the extended Euclidean algorithm (modular division) solves my "discrete logarithm" problem about as fast as I can compute the signature!

A 2048, 4096, or 10^9 bit key, it doesn't matter: the difficulty of cracking grows only polynomially with the difficulty of using it! (Here, even linearly!)

So the day someone finds the equivalent of "Euclidean division" for an ECC group, it is COMPLETELY BROKEN. The time it takes a user to compute his signature is about the time it takes an attacker to compute the secret key from the public key. At that point the ECC signature has become a simple MAC, and it doesn't even last 3 seconds once the key is broadcast.

You are describing a future cryptanalytic break of the math-theoretic security, i.e. of the intractability of the discrete logarithm over certain groups.

But your analogy does not apply, because Shor's algorithm (a form of cryptanalysis) is already known! It is not a future unknown.
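For completeness, your toy additive-group example is indeed trivially breakable; here is a minimal Python sketch (parameters made up for illustration):

[code]
# Toy "discrete log" in the additive group mod p: broken by one modular
# inversion (the extended Euclidean algorithm), as the quoted example says.
p = 2**127 - 1               # a Mersenne prime, standing in for a 2048-bit modulus
g = 123456789                # "generator": any nonzero element works additively
secret = 987654321
public = (secret * g) % p    # "g added to itself `secret` times"

# pow(g, -1, p) computes the modular inverse (Python 3.8+):
recovered = (public * pow(g, -1, p)) % p
assert recovered == secret   # solved as fast as the "signature" was computed
[/code]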

Also (and this is a minor point which isn't really necessary for my slam dunk), you are conflating the breakage of the math-theoretic security of the discrete logarithm with the security of the permutation constructions in hash functions. I repeat the distinction between the two, which you have failed to assimilate:

You are uninformed. Cryptanalytic breaks of hash functions typically lower the security by some number of bits, but don't lower it to 0 bits.

As I had originally pointed out, you are conflating two entirely different systems of security, and each can benefit orthogonally from increased bit lengths when we are concerned not with an intractable brute-force enumeration attack but rather with math-theoretic cryptanalytic breakage.
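To illustrate with approximate figures I recall from the literature (from memory, so treat them as illustrative): the best cryptanalysis of SHA-1 shaved bits off its collision resistance but left it far from zero, whereas Shor takes ECC to effectively zero:

[code]
# Approximate, from-memory figures for SHA-1 collision resistance:
generic_birthday_bound = 80   # 160-bit digest => ~2^80 generic collision cost
best_known_attack = 63        # Wang et al. (2005), roughly 2^63 operations

print("bits lost to cryptanalysis:", generic_birthday_bound - best_known_attack)
print("bits of security remaining:", best_known_attack)  # lowered, not zeroed
[/code]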

Thus...

--> if we assume that ECC will be broken one day, bitcoin's crypto scheme is IN ANY CASE not usable. This is why the "common sense" in cryptography of "protecting primitive crypto building blocks because we cannot assume they are secure" is just as much a no-go as the other common sense of security by obscurity. It sounds logical, but it is a fallacy. If you think ECC will be broken, don't use it. And if you use it, accept its security level as of today. Because you cannot foresee HOW HARD it will be broken, and if it is totally broken, you are using, well, broken crypto.

Not only are you failing to assimilate the fact that Shor's break is already known (not some future unknowable, as you are arguing), which is a sufficient slam dunk on why you are incorrect, but you are also claiming that hash functions can typically be entirely broken in one swoop, which afaik is not the case (and I studied the cryptanalysis history of past SHA submissions from round 1 to the final rounds).

Now, what is the reason we can allow LOWER security for the exposed public key than for the long-term address in an output? The reason a priori (and I also fell into that trap; as I told you before, my reason for these discussions is only to improve my own understanding, and here it helped) is that the public key only needs to secure the transaction between broadcast and inclusion in the chain. But as you point out, if blocks are full that can take longer than 10 minutes. It can be a matter of hours.

Now, if we are on a security requirement of days or weeks, then there's essentially not much difference between days or weeks and centuries. The factor between them is 10000 or so. That's 16 bits. A scheme that is secure for days or weeks only needs 16 bits of extra security to be secure for centuries ====> there is no reason to nitpick over 16 bits if we are talking about 128 bits or so.
There is no reason to introduce "short term security" if it is only 16 bits less than the long term security level.
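(Checking his arithmetic with a throwaway script, for reference: the factor from days-or-weeks to centuries works out to roughly 12 to 15 bits, the same order of magnitude:)

[code]
import math

seconds_per_day = 24 * 3600
seconds_per_week = 7 * seconds_per_day
seconds_per_century = 100 * 365.25 * seconds_per_day

print(math.log2(seconds_per_century / seconds_per_week))  # ~12.4 bits
print(math.log2(seconds_per_century / seconds_per_day))   # ~15.2 bits
[/code]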

Your conceptualization is incorrect. The point of long-term security is not the difference in the time it takes to crack at a given level of technology, but rather that over the long term we can't know when the moment comes that cracking has become sufficiently fast. The Bitcoin UTXO from 8 years ago that Satoshi has not spent could have been under attack for the past 8 years. By using the hash for long-term security, we force all attacks to begin only when the UTXO is spent. This restricts the damage to a very small number of transactions, and the community will become alarmed and take corrective action.
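This is exactly what hashing the public key buys. A minimal sketch of the HASH160 construction Bitcoin uses (the example key is hypothetical; ripemd160 availability in hashlib depends on your OpenSSL build):

[code]
import hashlib

def hash160(pubkey: bytes) -> bytes:
    """Bitcoin's address hash: RIPEMD160(SHA256(pubkey)).
    The public key itself stays hidden until the output is spent."""
    sha = hashlib.sha256(pubkey).digest()
    return hashlib.new("ripemd160", sha).digest()  # needs OpenSSL ripemd160

# Hypothetical 33-byte compressed public key, for illustration only:
pubkey = bytes.fromhex("02" + "11" * 32)
print(hash160(pubkey).hex())  # only this appears on-chain until spend time
[/code]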

Yes, you are learning from me (and sometimes I learn from you also). I was quite busy these past 4 years learning this stuff.

But I truly hope we can wrap this up, because I have more important work I want to do today than debating with you, unless you're sure you have an important point to make. But please try hard to think, because I have already thought deeply about all this stuff. You are not likely to find a mistake in my past analysis unless it is based on math above the level I'm knowledgeable in.

You are not accurately accounting for the savings in the portion of the UTXO set that must be stored in DRAM (for performance) versus what can be put on SSDs. Without it in DRAM, block propagation time would be horrendous and the orphan rate would skyrocket (because nodes can't propagate block solutions until they re-validate all the transactions, due to the anonymity of whoever produced the PoW).

Of course not. You don't have to keep all UTXO in DRAM. You can use much smarter database lookup tables. If the idea is that a node has to keep all UTXO in RAM, then bitcoin will be dead soon.

Satoshi just nailed you to the cross.

Nope, Gavin Andresen is talking bullshit and confusing cryptographic hashes with lookup-table hashes.

http://qntra.net/2015/05/gavin-backs-off-blocksize-scapegoats-memory-utxo-set/

If you need to keep a LOOKUP HASH of UTXO, then that doesn't need cryptographic security. There's no point in having 160-bit hashes if you can only keep a few GB of them in memory! 160-bit lookup hashes mean you expect on the order of 2^160 UTXO to be indexed. Now try to fit 2^160 things in a few GB of RAM.

You only need about a 48-bit hash of the UTXO to keep a database in RAM. That doesn't need to be cryptographically secure. It is completely crazy to keep 160-bit hashes as LOOKUP HASHES in a database hash table! And there are smarter ways to design lookup tables in databases than keeping a long hash table in RAM; ask Google.

Of course I am knowledgeable about Shannon information content (entropy) and also the efficiency of various sparse-space lookup algorithms (in fact I did considerable research on this in 2015, which is why I am prepared to refute you now); the problem is DDoS resistance. The collision resistance of 48 bits is too low (especially when related to the equations for a sparse-space lookup structure and the relationship between how full it is and the probability of collisions), so an attacker can DDoS the network with invalid addresses, which forces you to fall back to SSD storage and brings your system down to a crawl.
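To put rough numbers on that (the UTXO count here is my assumed figure, for illustration only): the birthday bound on a 48-bit lookup key already produces collisions at a realistic scale, before an attacker even starts grinding addresses:

[code]
def expected_colliding_pairs(entries: int, hash_bits: int) -> float:
    """Birthday approximation: n*(n-1) / (2 * 2^bits) expected collisions."""
    return entries * (entries - 1) / (2.0 * 2.0 ** hash_bits)

utxo_count = 50_000_000  # assumed order of magnitude, for illustration
for bits in (48, 160):
    print(f"{bits}-bit key: ~{expected_colliding_pairs(utxo_count, bits):.2e} "
          f"colliding pairs")  # ~4.4 at 48 bits, essentially 0 at 160 bits
[/code]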

And don't give me some BS about using PoW to rate-limit, because I already refuted @Gmaxwell when he offered that to me as a solution. Just remember that PoW is vulnerable to botnets in the nascent stage, and later it becomes a winner-take-all power vacuum. Satoshi had thought through all of these issues.

I'm not even putting this on the back of Satoshi. I claim he made enough errors to show he is not a math genius, but he is a smart guy nevertheless. I can criticise him only with hindsight; I'm absolutely not claiming to be at his level. But I claim that he's not the type of math genius that a guy like Nash is. This is the kind of argument I'm trying to build.

But SUCH stupid errors I don't even think Satoshi is capable of. It is Gavin Andresen who is talking bullshit to politically limit the block size. If it were ever true that RAM puts a hard limit on the amount of UTXO, then bitcoin was dead from the start. But it isn't.

I am sorry, friend, but the errors continue to be all yours. This is the 3rd time I've replied and obliterated your arguments (pointed out the errors in your thought processes). I went down all the same thought processes you did, but have since realized that Satoshi was indeed a genius.

Maybe you will finally pop out of your pompous confirmation bias delusion and start to realize it too. I hope you aren't going to make a total asshat of yourself as Jorge Stolfi hath done. I know you are smart enough to stop insisting when you've become informed. You are informed now.

The first piece in bold is the network configuration we talked about earlier: the backbone of miner nodes, with all others directly connecting to it, no longer a P2P network. (This has nothing to do with the current subject, but I thought it was interesting to note that Satoshi already conceived of miner centralization from the start.)

The second part is indeed considering bitcoin scaling on chain to VISA-like transaction rates, with the chain growing at 100 GB per day.  He's absolutely not considering a P2P network here, but a "central backbone and clients" system.

I quoted that back in 2013 when making my points about how Bitcoin was designed by the global elite.

The point, however, is that he most certainly is not thinking of any RAM limits on cryptographic hashes, and hence on the maximum permissible amount of existing UTXO.

No, you don't understand.

The DRAM issue applies to the early decentralized years of Bitcoin's adoption, so that the illusion of the idealism could be sustained long enough for Bitcoin to become unstoppable.

Given that Bitcoin will be a settlement layer between power brokers, the UTXO set will not be a problem, and centralized mining will mean that DRAM cost is not a problem either.

Bitcoin has already reached the stage of being unstoppable, which will be proven when it can't be mutated. Not by anyone. Not even by national governments. The NWO will be another matter, however.