Edit: perhaps the following can be ignored for now as I was using v1.2.3. I'll upgrade to
v1.3.1 and see how it goes.
---
I had 2 instances of ccminer-1.2.3 being killed by the kernel.
The only message to the console is "Killed".
The first time was on Windows with a GTX 1070, the second on Linux with a 1080 Ti.
It's possible that ccminer was simply a victim of the OOM killer; I have seen cases where
the process denied memory was not the source of the leak. However, the fact that
the two incidents occurred on different OSes makes that unlikely. The only
other application common to both systems is Firefox, so if ccminer really was just a victim,
Firefox would be the only plausible source of the leak.
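On the Linux side, the kernel log should show whether the OOM killer was actually involved and
which processes were eating memory at the time. A minimal sketch of scanning for those entries
(assumes Linux and that dmesg output is readable by the current user):

```python
# Minimal sketch: look for OOM-killer entries in the kernel log to see which
# process was killed and what the memory situation was at the time.
# Assumes Linux and that `dmesg` can be run without elevated privileges.
import subprocess

log = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
for line in log.splitlines():
    if "oom-killer" in line or "Out of memory" in line:
        print(line)
```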
The only other common factor is that both systems were using Pascal GPUs (sm_61), but this doesn't look
like a GPU issue to me (I'm certainly not a CUDA expert).
The problem is very intermittent (it has only happened to me twice) and isn't related to session length.
I have had sessions run for over 24 hours on both systems, yet one of the two incidents happened
after only 14 minutes of run time, so this is not a slow leak.
There appears to be a trigger that requires a rare set of circumstances, but once triggered, memory is
exhausted quickly.
It would be reasonable to conclude the memory issue is related to code specific to MTP. I haven't seen
this with any other forks of ccminer I've used.
The infrequency of the problem and the lack of a precise connection between the trigger event and the
termination of the process make it virtually impossible to troubleshoot.
Then again, the infrequency also makes the problem less serious. Still, I thought I'd document what I could
in case my analysis leads to some inspiration.
Edit Nov 7:
I have seen one instance of "Killed" with v1.3.1 on Linux.
I now suspect the trigger may not be ccminer but another application that pushes memory use over the edge.
I also suspect that edge is close because of the 45-49 GB of virtual memory reported for ccminer.
The VM apparently in use greatly exceeds the total available on my system: 16 GB RAM + a 2 GB swap file.
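One way to tell how much of that VM figure is actually backed by RAM or swap is to compare VmSize
against VmRSS and VmSwap in /proc. A rough sketch (the PID below is just a placeholder for ccminer's PID):

```python
# Rough sketch: compare a process's virtual size with its resident/swapped
# memory, since VmSize also counts address-space reservations (e.g. by the
# GPU driver) that may never be backed by RAM or swap. Linux only.
def mem_stats(pid):
    wanted = ("VmSize:", "VmRSS:", "VmSwap:")
    stats = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith(wanted):
                key, value = line.split(":", 1)
                stats[key] = value.strip()
    return stats

print(mem_stats(12345))  # replace 12345 with ccminer's actual PID
```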
Edit Nov 11:
Another revision. I have seen the problem with v1.3.1 on Windows: no console error message, but a pop-up
saying essentially the same thing.
The high VM usage seems to be a non-issue, as it is also observed with the tpruvot fork on a different algo.
Current speculation is an intermittent leak, likely in conditional code related to MTP.
Sorry for the delay in replying (I haven't checked this post in a while).
'Killed' usually means that the program was killed by the system because the system was running out of memory (i.e. a memory leak).
These problems have been addressed recently; using the most recent release on GitHub should solve that issue.
2 GB of swap file might be a bit tight: on MTP, each GPU requires at least 4.5 GB of swap.
The high VM usage is a consequence of two NVIDIA driver releases that came out around that time (roughly); you need to upgrade to the most recent drivers, where this issue hasn't been observed.