Re: CCminer(SP-MOD) Modded NVIDIA Maxwell kernels.
Board: Mining (Altcoins)
by joblo on 01/12/2015, 00:14:53 UTC
My 980 rig has now been running perfectly for 24 hours after I added 4 GB more RAM and increased the virtual memory to 16 GB.

Glad to see you're running stable.

I'd like to understand more about the pagefile size issue, particularly the notion that it's needed but not
really used. Clearly it is being used, even if only momentarily. Does anyone know if this issue exists on Linux?

From my understanding the memory/pagefile requirements increase with the number of GPUs.

The pagefile/RAM requirement comes from the initial memory allocation (cudaMalloc), which is done for each GPU and transits through the pagefile/RAM.
The allocation is made first in RAM/pagefile and then copied to GPU VRAM; however, it is never deallocated, since keeping it gives the possibility of copying part of what was allocated back to CPU RAM...
(provided I am not too wrong in my picture of how this works...)
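The staged-allocation model described in that quote (host buffer allocated first, copied to device memory, host copy kept around rather than freed) can be sketched in plain C. This is a minimal sketch of the poster's conjecture, not of how the CUDA runtime actually works; `device_alloc` and `staged_alloc` are hypothetical names, with ordinary `malloc` standing in for both VRAM and the pagefile-backed host copy:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical model of a staged allocation: the "device" buffer is
   filled via a host-side staging copy that is retained, not freed. */
struct staged_alloc {
    unsigned char *device;   /* stands in for GPU VRAM */
    unsigned char *staging;  /* stands in for the RAM/pagefile copy */
    size_t size;
};

int device_alloc(struct staged_alloc *a, size_t size) {
    a->device  = malloc(size);
    a->staging = malloc(size);            /* host copy allocated first */
    if (!a->device || !a->staging) {
        free(a->device);                  /* free(NULL) is a no-op */
        free(a->staging);
        return -1;
    }
    memset(a->staging, 0, size);
    memcpy(a->device, a->staging, size);  /* "upload" to the device */
    /* staging is deliberately NOT freed here: it stays reachable so
       data could be copied back later. It occupies RAM/pagefile for
       the life of the allocation, but it is not leaked. */
    a->size = size;
    return 0;
}

void device_free(struct staged_alloc *a) {
    /* Both copies are still reachable, so both can be released. */
    free(a->device);
    free(a->staging);
    a->device = a->staging = NULL;
    a->size = 0;
}
```

In this model the host-side footprint scales with the total device memory allocated, which matches the observation that the RAM/pagefile requirement grows with the number of GPUs.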

So it would seem that the CPU needs enough virtual memory to match all the GPU memory in use. I can see a preallocation
strategy being better for speed, but if the CPU is swapping to disk it would be slower than dynamic memory allocation.
This memory isn't used anymore once it has been allocated to the VRAM.

When memory is not deallocated after it is no longer needed, it's usually called a leak. Is this something that cudaMalloc
does transparently, or does the application have any control?

Wrong - it's only LEAKED if the pointer to that memory is lost, meaning you couldn't deallocate it if you wanted to - and it happens in repeated code. To "leak," you actually have to slowly continue to eat memory until there's no more left.

You are technically correct; perhaps "hog" would be a better term. That doesn't change the point that large amounts
of CPU memory remain allocated after they are no longer needed.