Re: Alt Coin Scams - Quark, Dogecoin etc.
Board: Altcoin Discussion
by KlondikeBear on 29/07/2015, 21:28:00 UTC
Hi BitcoinTalk,

I'm new to Bitcointalk but not new to cryptocurrency, so I wanted to point out a couple of observations I've made regarding altcoins.

Floating-point numbers in modern computers can only count whole units exactly up to 2^53, or approximately 9.0 x 10^15; beyond that point the arithmetic silently loses precision (google it).
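
If you want to see this for yourself, here is a quick Python sketch (doubles behave the same way in every language that follows IEEE 754):

Code:
# Integers are represented exactly by a 64-bit double only up to 2**53;
# above that, consecutive whole numbers start to collide.
limit = 2 ** 53                            # 9_007_199_254_740_992, roughly 9.0e15

print(float(limit) == float(limit + 1))    # True  -- the two values become indistinguishable
print(float(limit - 1) == float(limit))    # False -- below the limit they stay distinct
print(1e16 + 1 == 1e16)                    # True  -- adding one unit is silently lost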

Bitcoin has 2.1 x 10^15 units (21,000,000 coins divisible to 0.00000001), and Litecoin has 8.4 x 10^15 units (which is, not coincidentally, close to the limit mentioned above).

In simple terms, therefore: DO NOT put money into any coin with more than 90,000,000 coins divisible to 8 decimal places (Quark, Dogecoin, etc.) unless you are a speculator playing the pump-and-dump market, because by definition it is dead in the water.
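
To make the 90,000,000 figure concrete, here is the arithmetic behind that rule of thumb as a Python sketch (the Bitcoin and Litecoin caps are the ones quoted above; the Dogecoin number is a rough approximation, since its supply is uncapped):

Code:
# Total number of smallest units = coin cap * 10**8 (8 decimal places).
# Anything above 2**53 can no longer be represented exactly as a double.
SATS_PER_COIN = 10 ** 8
FLOAT_EXACT_LIMIT = 2 ** 53                # ~9.0e15

supplies = {
    "Bitcoin":  21_000_000,                # 2.1e15 units -- safely below the limit
    "Litecoin": 84_000_000,                # 8.4e15 units -- just under the limit
    "Dogecoin": 100_000_000_000,           # ~1.0e19 units -- far beyond it (rough figure)
}

for name, cap in supplies.items():
    units = cap * SATS_PER_COIN
    verdict = "fits" if units <= FLOAT_EXACT_LIMIT else "does NOT fit"
    print(f"{name}: {units:.1e} units -> {verdict} in the exact double range")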

This topic has surprised me, because transactions of 99,999,999 MOON are quite usual (100 million MOON is currently worth about $65), and they work fine. I know it works fine for Karmacoin too, and for some other billion-coin supplies.

Anybody with 100 million Dogecoins willing to test it? Smiley
If you want to test it with Dogecoin, 100 million DOGE is about 64 BTC; unfortunately I do not currently have that many bitcoins available to buy DOGE and test it.
 


I've googled the subject and found the following:

https://en.bitcoin.it/wiki/Proper_Money_Handling_(JSON-RPC)
"If you are writing software that uses the JSON-RPC interface you need to be aware of possible floating-point conversion issues. You, or the JSON library you are using, should convert amounts to either a fixed-point Decimal representation (with 8 digits after the decimal point) or ideally a 64-bit integer representation. In either case, rounding values is required."

http://www.johndcook.com/blog/2009/04/06/numbers-are-a-leaky-abstraction/
"Most explanations I’ve heard for the limitations of machine numbers are pedantic. “There are only a finite number of floating point numbers so they can’t represent real numbers well.” That’s not much help. It doesn’t explain why floating point numbers actually do represent real numbers sufficiently well for most applications, and it doesn’t suggest where the abstraction might leak.

A standard floating point number has roughly 16 decimal places of precision and a maximum value on the order of 10^308, a 1 followed by 308 zeros. (According to IEEE standard 754, the typical floating point implementation.)"
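
Those two figures (roughly 16 significant decimal digits and a ceiling near 10^308) are easy to confirm, since Python floats are IEEE 754 doubles:

Code:
import sys

info = sys.float_info
print(info.mant_dig)   # 53  -- bits of precision, hence the 2**53 whole-unit limit
print(info.dig)        # 15  -- decimal digits that are always preserved
print(info.max)        # 1.7976931348623157e+308 -- this is C's DBL_MAX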

http://www.johndcook.com/blog/2009/04/06/anatomy-of-a-floating-point-number/
"A floating point number has 64 bits that encode a number of the form ± p × 2e. The first bit encodes the sign, 0 for positive numbers and 1 for negative numbers. The next 11 bits encode the exponent e, and the last 52 bits encode the precision p. The encoding of the exponent and precision require some explanation.

The exponent is stored with a bias of 1023. That is, positive and negative exponents are all stored in a single positive number by storing e + 1023 rather than storing e directly. Eleven bits can represent integers from 0 up to 2047. Subtracting the bias, this corresponds to values of e from -1023 to +1024. Define e_min = -1022 and e_max = +1023. The values e_min - 1 and e_max + 1 are reserved for special use.

Since the largest exponent is 1023 and the largest significand is 1.f where f has 52 ones, the largest floating point number is approximately 2^1024 ≈ 1.8 × 10^308. In C, this constant is defined as DBL_MAX, defined in float.h."
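
To see the 1-bit sign / 11-bit exponent / 52-bit fraction layout described above, you can pull a double apart bit by bit; here is a small Python sketch using the standard struct module:

Code:
import struct

def dissect(x: float) -> None:
    # Reinterpret the 8 bytes of an IEEE 754 double as a 64-bit integer.
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign     = bits >> 63                  # 1 bit
    exponent = (bits >> 52) & 0x7FF        # 11 bits, stored with a bias of 1023
    fraction = bits & ((1 << 52) - 1)      # 52 bits
    print(f"{x}: sign={sign} exponent={exponent - 1023} fraction=0x{fraction:013x}")

dissect(1.0)        # sign=0 exponent=0  fraction=0x0000000000000
dissect(-2.5)       # sign=1 exponent=1  fraction=0x4000000000000
dissect(2.0 ** 53)  # sign=0 exponent=53 fraction=0x0000000000000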


If you don't agree, please argue.