I'm using PhoenixMiner 3.0c, and every 5 (sometimes 6) days PM stops working. What happened, and how do I fix this?
Does anyone have a .bat file to automatically restart/reboot PM if it stops working?
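As a stopgap, a simple watchdog .bat can relaunch the miner whenever it exits or crashes. A minimal sketch, assuming PhoenixMiner.exe is in the same folder; the pool is taken from your log, and YOUR_WALLET is a placeholder for your own address:

```bat
@echo off
:loop
REM Start the miner and block until it exits or crashes
PhoenixMiner.exe -pool eth-eu1.nanopool.org:9999 -wal YOUR_WALLET
REM Miner stopped - wait a few seconds, then start it again
timeout /t 5
goto loop
```

If memory serves, recent PhoenixMiner versions also have a built-in -rmode option for choosing what happens after a GPU crash (no restart / restart miner / reboot), so check the readme of your version before scripting around it.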
2018.07.26:00:42:06.959: eths Eth: Received: {"jsonrpc":"2.0","id":0,"result":["0xda40f35db43beeb3a7e6b66304068bbc0ca9ef3c25c4a8d68dad6a7de14835f0","0x0565d880e378b22250f35f260bac49983734114a9d338b7d7bacea1985c67dd4","0x000000006df37f675ef6eadf5ab9a2072d44268d97df837e6748956e5c6c2116"]}
2018.07.26:00:42:06.959: eths Eth: New job #da40f35d from eth-eu1.nanopool.org:9999; diff: 10000MH
2018.07.26:00:42:07.210: GPU3 GPU3: Starting up... (0)
2018.07.26:00:42:07.210: GPU3 Eth: Generating light cache for epoch #201
2018.07.26:00:42:07.370: GPU2 GPU2: Starting up... (0)
2018.07.26:00:42:07.396: GPU4 GPU4: Starting up... (0)
2018.07.26:00:42:07.396: GPU5 GPU5: Starting up... (0)
2018.07.26:00:42:07.422: GPU1 GPU1: Starting up... (0)
2018.07.26:00:42:07.429: GPU6 GPU6: Starting up... (0)
2018.07.26:00:42:10.009: GPU6 GPU6: Generating DAG for epoch #201
2018.07.26:00:42:10.009: GPU5 GPU5: Generating DAG for epoch #201
2018.07.26:00:42:10.010: GPU3 GPU3: Generating DAG for epoch #201
2018.07.26:00:42:10.011: GPU4 GPU4: Generating DAG for epoch #201
2018.07.26:00:42:10.011: GPU2 GPU2: Generating DAG for epoch #201
2018.07.26:00:42:10.011: GPU1 GPU1: Generating DAG for epoch #201
2018.07.26:00:42:11.238: main Eth speed: 0.000 MH/s, shares: 3574/0/22, time: 52:09
2018.07.26:00:42:11.238: main GPUs: 1: 0.000 MH/s (582) 2: 0.000 MH/s (595) 3: 0.000 MH/s (591) 4: 0.000 MH/s (596) 5: 0.000 MH/s (592) 6: 0.000 MH/s (618/22)
2018.07.26:00:42:11.511: GPU5 GPU5: DAG 19%
2018.07.26:00:42:11.515: GPU2 GPU2: DAG 19%
2018.07.26:00:42:11.515: GPU3 GPU3: DAG 19%
2018.07.26:00:42:11.518: GPU4 GPU4: DAG 19%
2018.07.26:00:42:11.528: GPU6 GPU6: DAG 19%
2018.07.26:00:42:12.519: eths Eth: Send: {"id":5,"jsonrpc":"2.0","method":"eth_getWork","params":[]}
2018.07.26:00:42:12.546: eths Eth: Received: {"jsonrpc":"2.0","id":0,"result":["0xda40f35db43beeb3a7e6b66304068bbc0ca9ef3c25c4a8d68dad6a7de14835f0","0x0565d880e378b22250f35f260bac49983734114a9d338b7d7bacea1985c67dd4","0x000000006df37f675ef6eadf5ab9a2072d44268d97df837e6748956e5c6c2116"]}
2018.07.26:00:42:13.011: GPU5 GPU5: DAG 41%
2018.07.26:00:42:13.036: GPU6 GPU6: DAG 41%
2018.07.26:00:42:13.223: GPU3 GPU3: DAG 44%
2018.07.26:00:42:13.225: GPU2 GPU2: DAG 44%
2018.07.26:00:42:13.230: GPU4 GPU4: DAG 44%
2018.07.26:00:42:13.698: GPU1 CUDART error in CudaProgram.cu:188 : unspecified launch failure (4)
2018.07.26:00:42:13.700: GPU1 GPU1 initMiner error: unspecified launch failure
2018.07.26:00:42:13.711: GPU3 CUDART error in CudaProgram.cu:188 : unspecified launch failure (4)
2018.07.26:00:42:13.711: GPU3 GPU3 initMiner error: unspecified launch failure
2018.07.26:00:42:13.711: GPU4 CUDART error in CudaProgram.cu:188 : unspecified launch failure (4)
2018.07.26:00:42:13.711: GPU4 GPU4 initMiner error: unspecified launch failure
2018.07.26:00:42:13.785: GPU6 CUDART error in CudaProgram.cu:188 : unspecified launch failure (4)
2018.07.26:00:42:13.785: GPU6 GPU6 initMiner error: unspecified launch failure
2018.07.26:00:42:13.801: GPU2 CUDART error in CudaProgram.cu:188 : unspecified launch failure (4)
2018.07.26:00:42:13.801: GPU5 CUDART error in CudaProgram.cu:188 : unspecified launch failure (4)
2018.07.26:00:42:13.801: GPU2 GPU2 initMiner error: unspecified launch failure
2018.07.26:00:42:13.801: GPU5 GPU5 initMiner error: unspecified launch failure
I would mitigate this problem using the -lidag option:
-lidag Slow down DAG generation to avoid crashes when switching DAG epochs
(0-3, default: 0 - fastest, 3 - slowest). You may specify this option per-GPU.
Currently the option works only on AMD cards
So try with "-lidag 2".
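For reference, a full command line with -lidag applied to all cards might look like the following sketch (pool taken from the log above, YOUR_WALLET is a placeholder; the per-GPU form assumes -lidag follows PhoenixMiner's usual comma-separated per-GPU syntax):

```bat
REM Slow down DAG generation on all GPUs
PhoenixMiner.exe -pool eth-eu1.nanopool.org:9999 -wal YOUR_WALLET -lidag 2

REM Or per-GPU, e.g. slow down only the first and third card:
REM PhoenixMiner.exe -pool eth-eu1.nanopool.org:9999 -wal YOUR_WALLET -lidag 2,0,2,0,0,0
```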
From your logs above, the problem with the GPUs happened during DAG generation. DAG generation is more intensive than normal hashing, so if your overclock is right at the limit, it will crash the GPUs when the DAG epoch switches.