Re: Claymore MRO/QCN/FCN/BCN GPU Miner v2.2
Board: Mining (Altcoins)
by ol92 on 18/06/2014, 16:16:59 UTC
Claymore, are you using VMProtect on the EXE files?

If so, that could explain a lot of the problems people are having. It has some problems when dealing directly with hardware.

I'm pretty sure it does use VMProtect.

If so, it would explain a lot of the memory errors and losing touch with the hardware.

Yes, I use VMProtect. Do you know enough about it to state that it is the reason for the problems with GPUs? If so, let me know the details, I will be glad to check it.
I always test both the original and the VMProtect versions on my systems, and for several months I never saw any differences across the different miners and other software I created.
For the CryptoNote miner, for example, neither version works until I set a 16GB pagefile.
If you have problems with VMProtect, the miner will not start on your system at all, because VMProtect does its job at startup only. If you see at least one line in the console after starting, everything is working properly.
The problem is in the AMD drivers: they don't want to work with large memory sizes. The drivers just don't allow an OpenCL app to allocate a lot of GPU memory at once. For example, a 290X card has 4GB, but by default OpenCL gives me less than 1GB in a single allocation. The GPU_MAX_ALLOC_PERCENT environment variable is a solution (with it I can allocate up to 3GB), but a buggy one, as you can see. And AMD found their own solution: today I tried Catalyst 14.6, and now I cannot allocate more than 1GB at once even with GPU_MAX_ALLOC_PERCENT. So they "solved" the problem ;) I have a couple of ideas for a possible workaround and I hope they will help; a new version will be available soon. Anyway, working with AMD OpenCL is some kind of magic, much more unstable than VMProtect.
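For anyone who wants to check what their driver actually allows, one quick way is to query the device properties through the OpenCL API. Below is a minimal standalone sketch (an illustration only, not part of the miner; the file name and all identifiers are made up). It prints the total memory and the per-allocation cap for each GPU; running it with and without GPU_MAX_ALLOC_PERCENT set shows the effect of the variable.

/* query_limits.c (hypothetical name): print OpenCL memory limits per GPU.
   Build with something like: cc query_limits.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id devices[8];
    cl_uint ndev = 0;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 8, devices, &ndev);

    for (cl_uint i = 0; i < ndev; i++) {
        char name[128];
        cl_ulong total = 0, max_alloc = 0;
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
        clGetDeviceInfo(devices[i], CL_DEVICE_GLOBAL_MEM_SIZE,
                        sizeof(total), &total, NULL);
        /* The per-allocation cap the driver enforces; on these Catalyst
           drivers it typically defaults to about 25% of global memory. */
        clGetDeviceInfo(devices[i], CL_DEVICE_MAX_MEM_ALLOC_SIZE,
                        sizeof(max_alloc), &max_alloc, NULL);
        printf("GPU %u (%s): %llu MB total, %llu MB max single allocation\n",
               i, name, (unsigned long long)(total >> 20),
               (unsigned long long)(max_alloc >> 20));
    }
    return 0;
}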
Maybe this can help: http://devgurus.amd.com/thread/159516

"Hi vanja_z, welcome to the forum,

  "Currently OpenCL users are limited to 25% of device memory,"

I don't know where you get this from, perhaps it's a rumor, but it's certainly not correct (there is a 512MB limit per allocation call, but you can allocate as much as you like).

I do predominantly scientific computing and often need very large and fast memory, so I am mostly using the 7970. On the 7970, I often allocate a single contiguous buffer that uses just shy of 3GB, the device limit. It's very simple: all you do is allocate in chunks of 512MB or less and make sure the chunks are rounded to about 0x4000 bytes; then they will be placed contiguously. For example, when allocating 2GB you might have kernel buffers like

__kernel void example(global float *A, global float *B, global float *C, global float *D) {}

Since this is the C language and A, B, C, D are memory pointers, you can use A to reference all of the memory.
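(To illustrate the chunking trick above on the host side: a minimal sketch, assuming an already-created cl_context named ctx; the function name alloc_chunks and the other identifiers are made up, and error handling is trimmed for brevity.)

/* Sketch: allocate ~2GB of device memory as four 512MB chunks, each under
   the per-call limit mentioned above. */
#include <CL/cl.h>

#define CHUNK_BYTES (512UL * 1024 * 1024)  /* 512MB per allocation call */
#define NUM_CHUNKS  4                      /* 4 x 512MB = 2GB total */

static cl_int alloc_chunks(cl_context ctx, cl_mem chunks[NUM_CHUNKS]) {
    cl_int err = CL_SUCCESS;
    for (int i = 0; i < NUM_CHUNKS && err == CL_SUCCESS; i++) {
        /* Each chunk stays at or below the 512MB per-call limit; the driver
           tends to place consecutive allocations back to back, as in the
           printout below. */
        chunks[i] = clCreateBuffer(ctx, CL_MEM_READ_WRITE, CHUNK_BYTES,
                                   NULL, &err);
    }
    return err;
}

Each chunk is then passed as a separate kernel argument (A, B, C, D above). Note that adjacent placement is driver behavior, not something the OpenCL spec guarantees, so treating the first pointer as a window onto the whole region is a driver-dependent trick.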

Here is a printout from a typical program start:

open:devices 3 gpus, 1 cpu, device(0) = Tahiti
start(cl):ndevs=3 gpus=1 time=57.136

buffer 0 start 01D1E000 to 21D1E000 size=20000000  Gap = 00000
buffer 1 start 21D1E000 to 41D1E000 size=20000000  Gap = 00000
buffer 2 start 41D1E000 to 61D1E000 size=20000000  Gap = 00000
buffer 3 start 61D1E000 to 81D1E000 size=20000000  Gap = 00000
buffer 4 start 81D1E000 to A1D1E000 size=20000000  Gap = 00000
buffer 5 start A1D1E000 to B0E1E000 size=0F100000  Gap = 00000
buffer 6 start B0E1E000 to BF21E000 size=0E400000  Gap = 00000
buffer 7 start BF21E000 to BFE1E000 size=00C00000  Gap = ----  (last address on GPU is BFFFFFFC)
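(For reference, the sizes are hex byte counts: 0x20000000 = 536,870,912 bytes = 512MB, exactly the per-call limit, and Gap = 00000 on every row shows each buffer starting where the previous one ended. The span from 01D1E000 to the last address BFFFFFFC is roughly 3GB, matching the "just shy of 3GB" figure above.)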

The last couple of buffers are a different size for an unrelated reason. Note, I have not used GPU_MAX_ALLOC-type parameters and have never seen a need to. This also works on Cayman and Barts devices, but I prefer Tahiti because the memory is so large and fast. Sorry, I don't know much about Nvidia devices because I usually choose hardware based on specifications.

Hope it helps."