Showing 5 of 5 results by wpwrak
Board: Mining (Altcoins)
Re: [Review]: Linzhi Phoenix - 2700 MH/s Ethash asic miner
by wpwrak on 13/01/2021, 17:55:48 UTC
Quote: Did you ever test this on ETC? I am a bit confused as to why this machine would work on ETC now, as I thought I read that they changed the algorithm slightly to protect from ASICs and 51% attacks. If they did, and this could still mine ETC even after the change, then it might be viable past September.

ETC has made a small change to the DAG generation algorithm (now called ETChash). PoW is still the same Dagger-Hashimoto. And yes, the miners support Ethash and ETChash.
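For concreteness, the change is basically a recalibration of the epoch length (per ECIP-1099, the "Thanos" change). A minimal sketch, with parameter names of my own choosing, and ignoring the activation height at which ETC switched over:

```python
# Sketch of the Ethash vs. ETChash difference: only the DAG epoch length
# changes; the Dagger-Hashimoto PoW itself is untouched.
ETHASH_EPOCH_LENGTH = 30_000   # blocks per DAG epoch (classic Ethash)
ETCHASH_EPOCH_LENGTH = 60_000  # blocks per DAG epoch on ETC per ECIP-1099

def dag_epoch(block_number: int, epoch_length: int) -> int:
    """DAG epoch for a given block; the DAG only grows when the epoch changes."""
    return block_number // epoch_length

# The same block height lands in a lower epoch on ETC, so the DAG stays
# smaller, which is what keeps fixed-memory ASICs viable for longer.
print(dag_epoch(12_000_000, ETHASH_EPOCH_LENGTH))   # 400
print(dag_epoch(12_000_000, ETCHASH_EPOCH_LENGTH))  # 200
```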

Here are Linzhi's miners on ETC:
https://etc.crazypool.org/#/account/0x3091385d7AAa9d801c3e4C56A8dF19B3cDB0a35b

(Most of the machines we currently have on ETC are in R&D, that's why the overall performance is a bit bumpy.)
Board: Development & Technical Discussion
Re: stupid question: why not move transactions outside blocks ?
by wpwrak on 30/12/2017, 18:50:54 UTC
Quote: Maybe I misunderstand you, but what you describe sounds pretty close to what SegWit is already doing:
https://github.com/bitcoin/bips/blob/master/bip-0141.mediawiki

Hmm, I'm still struggling to understand what exactly SegWit is :-) The BIP touches a lot of issues and suggests many future developments, so I'm not sure how much of what SegWit is intended to do is actually implemented.

One explanation I found basically describes it as an accounting trick: you move part of the data in the block to a different place and count the bytes there for less than bytes elsewhere, which lets you grow blocks a little without exceeding the 1 MB limit. But, if I understand this right, the transactions would still be part of the same block.
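The accounting can actually be made precise: BIP 141 replaces the 1 MB size limit with a weight limit of 4,000,000 units, where non-witness bytes count four and witness bytes count one. A minimal sketch (function name is mine):

```python
MAX_BLOCK_WEIGHT = 4_000_000  # BIP 141 consensus limit, in weight units

def block_weight(base_size: int, total_size: int) -> int:
    """BIP 141 block weight: non-witness bytes count 4x, witness bytes 1x.

    base_size  -- serialized size in bytes WITHOUT witness data
    total_size -- serialized size in bytes WITH witness data
    """
    return 3 * base_size + total_size

# A block with no witness data at all has weight = 4 * size, so the old
# 1 MB cap is preserved for legacy blocks:
print(block_weight(1_000_000, 1_000_000))  # 4000000, exactly at the limit
```

So witness bytes are "smaller" only in the sense that they cost 1 weight unit instead of 4; they still travel and are stored with the block.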

If SegWit allows most of the transaction data to travel through a different channel, then it would be indeed what I've been looking for.

Thanks !

- Werner
Board: Development & Technical Discussion
Re: stupid question: why not move transactions outside blocks ?
by wpwrak on 30/12/2017, 18:36:25 UTC
Quote: The actual transaction data would still have to reach every node in the network.

Yes, but it should already be in the mempool if it was verified close to the time of mining, shouldn't it ?

Then it would have to be stored along with the blockchain (i.e., this doesn't help if the amount of persistent storage is an issue), and you'd need some mechanism to update nodes that don't have that data. You'd also need to handle cases where, say, a new block overtakes a transaction it references. But I think the "tip" of global activity should generally not need much extra work.
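As a purely hypothetical sketch of what I have in mind (all names invented, this is not an existing client API): a node receiving a block that carries only txids would resolve them against its mempool and fetch whatever is missing from peers.

```python
# Hypothetical sketch: resolve a block's txids against the local mempool,
# falling back to fetching the full transaction from peers on a miss.
def resolve_block_txs(block_txids, mempool, fetch_missing):
    """block_txids   -- txids the block commits to
    mempool       -- dict mapping txid -> full transaction
    fetch_missing -- callback that retrieves a missing tx from peers"""
    txs = []
    for txid in block_txids:
        tx = mempool.get(txid)       # usually a hit, if sync is tight
        if tx is None:
            tx = fetch_missing(txid)  # fallback: ask peers for the full tx
        txs.append(tx)
    return txs
```

If mempool synchronization is tight, the fallback path would rarely trigger, so the big data transfer would mostly happen ahead of block acceptance.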

- Werner
Board: Development & Technical Discussion
Re: stupid question: why not move transactions outside blocks ?
by wpwrak on 30/12/2017, 17:16:20 UTC
Quote: Not sure, but is it not the way the Lightning Network works?

I'd picture Lightning more like an account shared by two parties where deposits and withdrawals are costly but how you split what's in the account is up to the two parties involved, and nobody else needs to know about anything but the final balance.

Lightning then provides tools to manage trust issues and to let you bridge between multiple such shared accounts, forming a network.

So Lightning reduces the number of transactions that are visible on the blockchain. What I've described should be simpler and largely orthogonal: it would allow growing the amount of information covered by a block without increasing the block size. That information does of course still have to live somewhere, so this wouldn't reduce the number of transactions nodes have to deal with, similar to directly increasing the block size. But it would avoid having to move one huge chunk of data around at the time of accepting transactions into the blockchain, since that data would already be in the mempool.

- Werner

Board: Development & Technical Discussion
stupid question: why not move transactions outside blocks ?
by wpwrak on 30/12/2017, 14:19:48 UTC
When hearing about scalability issues in Bitcoin and others, a common theme is the limited block capacity. What I immediately thought of is "why not replace the transactions with hashes ?", i.e., going from roughly 250 bytes per transaction to maybe 32; and if that's still too much, one could use Merkle trees. The actual transaction data would travel separately, and mempool synchronization would have to be made tighter.
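To illustrate the Merkle-tree variant: any number of transaction hashes can be committed to with a single 32-byte root. A minimal sketch of the Bitcoin-style construction (double SHA-256, duplicating the last hash on odd levels, as in Bitcoin's block header):

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids: list) -> bytes:
    """Merkle root over a list of 32-byte txids, Bitcoin-style:
    pair up hashes level by level, duplicating the last one when a
    level has an odd count."""
    if not txids:
        raise ValueError("empty transaction list")
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate last hash on odd levels
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

A block would then only need the 32-byte root; note that Bitcoin block headers already commit to transactions exactly this way, so the question is only about moving the transaction bodies themselves out of the block.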

Now, I'm sure that I'm not the first to think of such an approach. Given that I've never heard such a thing mentioned, it must have been discussed and rejected early on. I would like to find out what problems were found with this kind of approach.

Would someone have a pointer to that discussion or a summary ?

Thanks,
- Werner