Post
Topic
Board Announcements (Altcoins)
Re: BYTEBALL: Totally new consensus algorithm + private untraceable payments
by
Come-from-Beyond
on 17/03/2017, 13:55:54 UTC
2. Anybody in here can grab today's copy of the Byteball database, which is around 1.4 GB and compresses with gzip -9 down to 300 MB.

Here we go:

How much HDD space do I need to download the full DAG today? Is the DAG somehow limited in size, or can someone make billions of transactions and bloat it?

Please read the OP:
The fees paid for storing one’s transactions (or any other data) in the Byteball database are equal to the size of the data being stored.  If the size of your transaction data is 500 bytes, you pay exactly 500 bytes (the native currency of Byteball) in fees.
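The fee rule quoted from the OP is simple enough to sketch in a few lines. This is an illustrative sketch only; the function name and payload layout here are my own, not Byteball's actual unit format.

```python
# Sketch of the quoted fee rule: the fee, denominated in bytes
# (Byteball's native currency), equals the byte size of the data stored.
# `storage_fee` is a hypothetical helper, not part of any real API.

def storage_fee(payload: bytes) -> int:
    """Fee in bytes (the currency) for storing `payload` in the DAG."""
    return len(payload)

tx = b"x" * 500          # a 500-byte transaction payload
print(storage_fee(tx))   # pay 500 bytes to store 500 bytes
```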

So what? With a fee of $50 someone can make the DAG twice as big overnight? With a fee of $500 they can make it 10 GB bigger? There are people who would do this just for fun. Is there some pruning mechanism that will allow cutting old transactions from the database?

There isn't. I've been pointing this out for a long time but nobody is listening. Byteball has the same scalability problem as any other blockchain with an adjustable block size limit: the database grows indefinitely, and hardware and bandwidth are the limiting factors. Moreover, if somebody wants to attack Byteball by pushing huge amounts of data into the database, it's pretty easy and cheap at the current price. The 8-year-old Bitcoin blockchain is nearing 100 GB, and you could make the Byteball database that big in one day for just $6,700.
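The $6,700 figure above follows from simple arithmetic, assuming fee = 1 currency byte per stored byte. The implied market price of roughly $67 per GB worth of bytes is my back-of-the-envelope assumption derived from the post's own numbers, not a quoted rate.

```python
# Back-of-the-envelope check of the attack cost claimed above.
# Assumption: 1 GB of bytes (the currency) costs about $67 on the
# market, which is the rate implied by "100 GB for $6,700".

PRICE_PER_GB_USD = 67.0  # assumed market price of 1 GB of bytes

def bloat_cost_usd(data_gb: float) -> float:
    """USD cost to append `data_gb` gigabytes of junk data to the DAG."""
    return data_gb * PRICE_PER_GB_USD

print(bloat_cost_usd(100))  # matching Bitcoin's ~100 GB chain in a day
```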

We definitely need an explanation from the dev about it.

I don't think the problem exists today: growth isn't too fast and the load on nodes isn't too big, hence it's a low-priority task. 100 GB for the Bitcoin database is small anyway; compare it with how much storage a random bank requires to run its business. The Byteball database does grow fast, but it compresses well, and other implementations could make it even smaller.

It's quite obvious that once people start to care about their GBs, they'll do everything to spend as few of them on fees as possible. It's a no-brainer to compress data before pushing it to Byteball storage. As a result, most of the data in the Byteball DB will already have near-maximum entropy, and at that point lossless compression won't give a noticeable benefit.

I hope you now understand why your post was misleading...