Board: Mining (Altcoins)
Re: PhoenixMiner 2.6: fastest Ethereum/Ethash miner with lowest devfee (Windows)
by PhoenixMiner on 14/02/2018, 09:47:25 UTC
First I've seen of this. I am anxious to try out a new miner. Any chance of you adding support for Zcash, or a separate miner for Zcash? It would be a great boon.

Mistercoin-
   We first have to implement a lot of other features, and then possibly add support for other algorithms.

I would like to understand further the lines of text printed on the console while the miner application is running.
Could you help me understand:

1. How to find out which GPUs are idle (waiting for job) and which GPUs are busy solving an algorithm?
2. How to find out how long it took to finish a share? and are shares solved by 1 GPU only or which GPUs helped to complete the share?
3. How to find out when a solution was submitted?
4. What is difference between "New job found... from " and "ETH share found"?

Let me know whether these details are shown on the app's console, whether such debug strings can be printed by adding a parameter to the script, or whether these details are unavailable.

The reason I'm asking is that we can get too caught up in hashrates (processing power) and forget that idle GPUs (due to the absence of a job, or difficulty finding a share) affect performance too. There is no point overclocking GPUs only to have them stuck waiting, like a Ferrari stuck in traffic.
  The GPUs are not sitting idle waiting for work as long as they show a hashrate. Generally, the pool sends work immediately after the miner connects, and all GPUs work on the same "job" without any idle periods. You can see the number of shares found by each GPU in the statistics, in parentheses after the hashrate, like this (also note that GPU3 had one incorrect share):
Quote
Eth speed: 141.837 MH/s, shares: 169/0/0, time: 1:29
GPUs: 1: 28.508 MH/s (27) 2: 28.309 MH/s (32) 3: 28.315 MH/s (37/1) 4: 28.309 MH/s (34) 5: 28.397 MH/s (39)
A new job is not "found"; it is sent by the pool. The miner tries to find a solution (a "share"), which is sent to the pool as proof that the miner is working. In the long run, the number of shares will be proportional to the average hashrate of each GPU.
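The statistics format above (accepted shares in parentheses, with incorrect shares after a slash) is easy to pull apart with a small script. Here is a hedged sketch of such a parser; the regex and field names are my own assumptions based only on the sample line quoted above, not on any documented PhoenixMiner log format:

```python
import re

# Assumed format: "GPUs: 1: 28.508 MH/s (27) ... 3: 28.315 MH/s (37/1) ..."
# where "(accepted)" or "(accepted/incorrect)" follows each hashrate.
GPU_RE = re.compile(r"(\d+): ([\d.]+) MH/s \((\d+)(?:/(\d+))?\)")

def parse_gpu_stats(line):
    stats = {}
    for gpu, rate, ok, bad in GPU_RE.findall(line):
        stats[int(gpu)] = {"mhs": float(rate),
                           "accepted": int(ok),
                           "incorrect": int(bad) if bad else 0}
    return stats

line = "GPUs: 1: 28.508 MH/s (27) 2: 28.309 MH/s (32) 3: 28.315 MH/s (37/1)"
stats = parse_gpu_stats(line)
print(stats[3])  # {'mhs': 28.315, 'accepted': 37, 'incorrect': 1}
```

Watching the per-GPU share counts over time is a quick way to confirm that each card's shares stay roughly proportional to its hashrate, as described above.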

If you have multiple GPUs in a system, you will need to do this for each GPU.
Disable all the GPUs but one. Find the best number for that GPU.
Record that number, then disable that GPU and enable the next one.
Repeat until you have the settings for all GPUs.

This can take some time, but it’s worth it.
  Nice description. One small tip: you don't have to disable the other GPUs unless the rig is hitting thermal or power limits and throttling as a consequence. Just ignore the hashrates of the other GPUs and only try to maximize the hashrate of the GPU you are currently adjusting.
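The tip above boils down to: keep all GPUs mining, sample only the card being tuned, and compare the average hashrate per clock setting. A minimal sketch, with entirely invented clock labels and sample values:

```python
# Compare averaged hashrate samples of the one GPU being tuned.
# The clock settings and readings below are made up for illustration;
# in practice you would collect them from the miner's log output.
def mean(samples):
    return sum(samples) / len(samples)

runs = {
    "+400 MHz mem": [28.1, 28.2, 28.1, 28.2],  # MH/s readings for GPU3 only
    "+500 MHz mem": [28.5, 28.6, 28.5, 28.6],
}
best = max(runs, key=lambda clock: mean(runs[clock]))
print(best)  # +500 MHz mem
```

Averaging several samples per setting matters because a single hashrate reading fluctuates; a handful of samples is usually enough to rank two clock settings.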

The only thing I don't understand is why GPU0 is GPU1. This makes the fan speed and temp readout in the miner not line up with the correct cards.

Fan temp on GPU0 = GPU5 in Phoenix             Hashrate of GPU0 = GPU1 in Phoenix
Fan temp on GPU1 = GPU1 in Phoenix             Hashrate of GPU1 = GPU2 in Phoenix
Fan temp on GPU2 = GPU2 in Phoenix             Hashrate of GPU2 = GPU3 in Phoenix
Fan temp on GPU3 = GPU3 in Phoenix             Hashrate of GPU3 = GPU4 in Phoenix
Fan temp on GPU4 = GPU4 in Phoenix             Hashrate of GPU4 = GPU5 in Phoenix
Fan temp on GPU5 = GPU6 in Phoenix             Hashrate of GPU5 = GPU6 in Phoenix
Fan temp on GPU6 = GPU7 in Phoenix             Hashrate of GPU6 = GPU7 in Phoenix
Fan temp on GPU7 = GPU8 in Phoenix             Hashrate of GPU7 = GPU8 in Phoenix
Fan temp on GPU8 = GPU9 in Phoenix             Hashrate of GPU8 = GPU9 in Phoenix
  We've seen similar things on AMD cards because of broken bus ID support in the drivers, but not on Nvidia cards. The ordering is based on the bus ID (ascending), and the fan/temp reporting is matched to the bus ID as well, but in some cases like this one the NVML library obviously doesn't report the correct bus ID for GPU5, and it ends up first. Could you press the 's' key and copy the list of GPUs with their bus IDs (it may be easier to do this from the log file than from the screen)?
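To illustrate why a single misreported bus ID scrambles the whole ordering: sorting by bus ID moves the card with the bogus (too low) ID to the front, shifting every other index by one. The bus values here are invented for the example:

```python
# Miner-style ordering: sort GPUs by PCI bus ID, ascending.
# GPU5 claiming bus 0 is a made-up stand-in for a driver misreport.
gpus = [
    {"name": "GPU0", "bus": 1},
    {"name": "GPU1", "bus": 2},
    {"name": "GPU5", "bus": 0},  # driver misreports the bus ID
]
ordered = sorted(gpus, key=lambda g: g["bus"])
print([g["name"] for g in ordered])  # ['GPU5', 'GPU0', 'GPU1']
```

This matches the symptom in the table above: one card jumps to the first slot and all the cards behind it appear off by one.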

Isn't there a way to automate this? Have the miner calculate a moving average while it changes this parameter, thus finding the best value for each card.

"PhoenixMiner, the miner tailored to your cards" does sound appealing.
   There is, but given all the other features waiting to be implemented, this is going to have to wait its turn. Also, it would require that the rig have absolutely stable conditions during the adjustment period, or the results will be invalid.
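The moving-average idea from the question above can be sketched in a few lines. This is not how PhoenixMiner works internally, just an illustration of smoothing noisy hashrate samples over a sliding window, with invented readings:

```python
from collections import deque

# Simple moving average over the last N hashrate samples; an auto-tuner
# could step through clock values and keep the one with the highest
# smoothed reading. All values here are invented.
class MovingAverage:
    def __init__(self, window):
        self.samples = deque(maxlen=window)  # old samples drop off automatically

    def add(self, value):
        self.samples.append(value)
        return sum(self.samples) / len(self.samples)

ma = MovingAverage(window=3)
for reading in [28.0, 28.4, 28.4, 28.7]:  # MH/s samples
    avg = ma.add(reading)
print(round(avg, 1))  # 28.5  -> (28.4 + 28.4 + 28.7) / 3
```

As the reply notes, this only ranks settings reliably if conditions (temperature, power, pool job flow) stay stable across the whole sweep.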

Using MSI Afterburner
   Hopefully, 2.7a will help with this as well. Still, the only real solution is to add overclocking options to the program itself and periodically reset the clocks if they drift away from the intended values. We are implementing this for AMD cards now, and we will try to do it for Nvidia cards as well, even though the relevant NVAPI functions are proprietary.

I have been using PhoenixMiner for about two days and it's working better than Claymore: I'm getting 0.5 MH/s more, and temps are a degree lower. A small improvement, but still an improvement. However, from time to time I get this error:

17575:19:25:26.793: GPU1 CUDA error in CudaProgram.cu:433 : unknown error (999)
17575:19:25:26.795: GPU1 GPU1 search error: unknown error
  If this always happens on the same GPU (GPU1 in this instance), error 999 is strongly correlated with too high a memory overclock. Dial back the memory clock by 20-30 MHz on the corresponding GPU and see if that helps.

I haven't read through every reply, so I apologize if this has already been answered.
You say a Linux version is planned but may take some time. Do you have any estimate at all as to when that would be? This miner sounds excellent and I'd love to switch my rigs over.
  We have to implement hardware control and dual mining before starting to work seriously on the Linux version, so no sooner than a few months, unfortunately.