phase 1
- identify the cause of the swings of more than 3x and 1/3x
The difficulty change is limited to a 3x / 1/3x step, but the new difficulty is calculated from the average difficulty over the previous 24 blocks. So you can't simply compare a block's diff to that of the previous block as displayed in the list. Please see https://github.com/nlgcoin/guldencoin/blob/master/src/main.cpp#L1286, line 1299, and lines 1303-1306.

- work out a solution for point 1 first
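The window-average mechanism described above can be sketched roughly as follows. This is a simplified illustration under my own assumptions (names, deterministic setup), not the actual Guldencoin code; the point is that the clamp bounds the new diff against the 24-block average, so the block-to-block change shown in a block list can legitimately exceed 3x.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Simplified sketch of a DGW3-style retarget (NOT the real main.cpp code).
// The new difficulty is derived from the average over the last N blocks,
// and the 3x / (1/3)x clamp is applied to the timespan of the whole
// window, so the printed diff of block n versus block n-1 can move by
// more than 3x / 0.33x between two adjacent blocks.
struct Block { double difficulty; int64_t solveTimeSec; };

double dgwRetarget(const std::vector<Block>& window, int64_t targetSpacingSec) {
    double avgDiff = 0;
    int64_t actualTimespan = 0;
    for (const Block& b : window) {
        avgDiff += b.difficulty;
        actualTimespan += b.solveTimeSec;
    }
    avgDiff /= window.size();
    const int64_t targetTimespan = targetSpacingSec * (int64_t)window.size();

    // Clamp relative to the window average, not the previous block.
    if (actualTimespan < targetTimespan / 3) actualTimespan = targetTimespan / 3;
    if (actualTimespan > targetTimespan * 3) actualTimespan = targetTimespan * 3;

    return avgDiff * (double)targetTimespan / (double)actualTimespan;
}
```

With 24 steady blocks the retarget stays put; with 24 very fast blocks it rises by the maximum 3x relative to the window average, even though adjacent blocks in the list may show a different ratio.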
We could change it so the diff is calculated only from the diff of the previous block. Then the diff change between individual blocks could never exceed 3x or 0.33x. But that also means the diff can go up 3^3 = 27x (!) over three lucky/fast blocks. I don't think that is a smart thing to do.

- if the fix for point 1 is outside the DGW code, see if DGW is acting as expected in the first place
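A quick sketch of why the per-previous-block variant compounds (the `clampStep` helper is hypothetical, not proposed code): three fast blocks each take the maximum legal 3x step, and the total swing is 3 * 3 * 3 = 27x.

```cpp
#include <algorithm>
#include <cassert>

// Hypothetical per-block clamp: the new diff may differ from the previous
// block's diff by at most a factor of 3 in either direction.
double clampStep(double prevDiff, double rawNewDiff) {
    return std::max(prevDiff / 3.0, std::min(prevDiff * 3.0, rawNewDiff));
}

// Compounded difficulty after n consecutive fast blocks, each demanding
// a far larger raise than the clamp allows: 1 -> 3 -> 9 -> 27 -> ...
double compoundedAfter(int nFastBlocks) {
    double d = 1.0;
    for (int i = 0; i < nFastBlocks; ++i)
        d = clampStep(d, d * 100.0);  // every step is "legal" (exactly 3x)
    return d;
}
```

So a per-block clamp bounds each individual step but not a short run of steps, which is exactly the 27x concern above.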
I don't think point 1 needs to be fixed. I'm not aware of any code influencing the difficulty outside DGW. DGW is acting as it should (as it does in other coins), but it simply can't handle these hash/sec spikes. I would love to see some input or a simulation of why this happens; until now I haven't been able to pinpoint the large swings we encounter. You're right that it could be the large hash swings, but why does NLG have more problems with that than other coins? So the more people who look at the code from start to end and possibly come up with a detailed reason, the better.

phase 2
- identify the problem and describe it properly (yeah, I know it's the jumppool, but what's really the underlying thing that causes the trouble?)
You're right, it is good to have a thorough explanation with some examples, as long as we keep in mind that difficulty readjustment should handle more problems than only this one.

- work out a modification of the algo by testing it on past hashrate swings and see if it smooths them out
Testing is very important; we shouldn't release a new algorithm too fast. Placing a new algorithm on the existing chain doesn't give a proper indication though, as it does not influence block times and doesn't change multipool behaviour. It would be better to simulate several cases of hashrate joins/leaves. I have done this before with DGW3, but never with the amounts that we currently see. This time we must simulate extreme hashrates, even multiples of what we currently see with Clevermining. I plan to release the software I used for simulations; it just needs some polishing and persistency of scenarios (currently the user must enter the block times manually, each time). That way everyone can apply and test changes locally.

- test the modification thoroughly for vulnerabilities and flaws
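As a sketch of what such a join/leave simulation harness could look like (everything here is an assumption of mine: the deterministic solve-time model, the stand-in retarget, the names; it is not the tool mentioned above):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct SimBlock { double difficulty; double solveTimeSec; };

// Stand-in window retarget: scale the average diff of the last `window`
// blocks by the target-to-actual timespan ratio, clamped to 1/3x..3x.
double retarget(const std::vector<SimBlock>& chain, size_t window, double targetSpacing) {
    size_t n = window < chain.size() ? window : chain.size();
    double avgDiff = 0, span = 0;
    for (size_t i = chain.size() - n; i < chain.size(); ++i) {
        avgDiff += chain[i].difficulty;
        span += chain[i].solveTimeSec;
    }
    avgDiff /= n;
    double target = targetSpacing * n;
    if (span < target / 3) span = target / 3;
    if (span > target * 3) span = target * 3;
    return avgDiff * target / span;
}

// Deterministic model: a baseline hashrate of 1.0 solves difficulty 1.0 in
// exactly targetSpacing seconds, so solveTime = diff * targetSpacing / hashrate.
// `hashrates` is the schedule of joins/leaves to simulate (e.g. a 10x jumppool).
std::vector<SimBlock> simulate(const std::vector<double>& hashrates, double targetSpacing) {
    std::vector<SimBlock> chain{{1.0, targetSpacing}};
    for (double h : hashrates) {
        double d = retarget(chain, 24, targetSpacing);
        chain.push_back({d, d * targetSpacing / h});
    }
    return chain;
}
```

A flat schedule leaves the difficulty at 1.0; a sudden 10x join produces a burst of fast blocks followed by a rising diff, which is exactly the overshoot behaviour worth measuring before and after any algorithm change.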
Very important! I think 24Kilo can be of great help here.

- implement it
This time deployment will be very smooth. We can release a new algorithm one week in advance, with a hardcoded block height on which all nodes switch. That's great, because it means we can also first modify DGW to act, and at a later stage release a permanent fix - however permanent a fix in crypto can be. I really like the weighted-average ideas that are being discussed here. I had the idea of a weighted average in my mind, but I hadn't worked it out yet. It's great to see the discussion here, please keep it going! It is definitely influencing the code I'm writing.
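For a sense of what a weighted average could look like, here is my own sketch of the idea under discussion (not the code being written): give recent blocks more weight than old ones, so a single lucky block far back in the window pulls the retarget around less.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Linearly recency-weighted average difficulty: the oldest block gets
// weight 1, the newest gets weight N. A spike at the old end of the
// window therefore influences the result less than a recent one would.
double weightedAvgDiff(const std::vector<double>& diffs) {  // oldest first
    double num = 0, den = 0;
    for (size_t i = 0; i < diffs.size(); ++i) {
        double w = (double)(i + 1);
        num += w * diffs[i];
        den += w;
    }
    return num / den;
}
```

For example, with weights 1, 2, 3 over three blocks, {1, 1, 4} (spike in the newest block) gives (1 + 2 + 12) / 6 = 2.5, while {4, 1, 1} (spike in the oldest) gives (4 + 2 + 3) / 6 = 1.5; a plain average would say 2.0 in both cases.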
Please understand that when I release the code I have so far, it's not a final version. We do this together: it will be our algorithm, created by the Guldencoin community. So the code is open for feedback and changes. This is the only way we can fix this problem. We all have great ideas and knowledge. If we keep combining our efforts we will create the best possible algorithm.

You're very right here.