Unless I'm mistaken, the source code links on that page are dead ends. Also, unless I'm mistaken, it uses a different algorithm and runs on smaller numbers. A useful thing to find would be a CUDA program (or, ideally, an OpenCL one, which would run on both NVIDIA and AMD) that performs fast modular exponentiation of large numbers (i.e., larger than 64 bits; the numbers needed are on the order of 256 bits or larger). Modular exponentiation is at the heart of both the Fermat Primality Test and the various forms of the Euler-Lagrange-Lifchitz Test. It would also be good to find a sieving method implemented in a GPU-specific language, as sieving is the other major piece of math in the search miners are doing.
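For concreteness, here's a minimal CPU-side sketch of the square-and-multiply modular exponentiation such a kernel would have to implement, restricted to 64-bit operands so the compiler's 128-bit integers can hold the intermediate products. The function names and the base-2 Fermat check are my own illustration, not code from the linked page; a real 256-bit GPU version would run the same loop, but with each value spread across several 32- or 64-bit limbs, which is exactly the part that's missing.

```c
#include <stdint.h>
#include <stdio.h>

/* Right-to-left square-and-multiply: computes (base^exp) mod m.
 * Limited to 64-bit operands here; GCC/Clang's unsigned __int128
 * holds the 128-bit intermediate products. A 256-bit GPU version
 * would run the same loop over multi-limb values. */
static uint64_t modexp(uint64_t base, uint64_t exp, uint64_t m)
{
    uint64_t result = 1 % m;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            result = (uint64_t)(((unsigned __int128)result * base) % m);
        base = (uint64_t)(((unsigned __int128)base * base) % m);
        exp >>= 1;
    }
    return result;
}

/* Fermat test to base 2: n is a probable prime if 2^(n-1) == 1 (mod n). */
static int fermat_probable_prime(uint64_t n)
{
    if (n < 3 || (n & 1) == 0)
        return n == 2;
    return modexp(2, n - 1, n) == 1;
}

int main(void)
{
    printf("%d\n", fermat_probable_prime(1000000007ULL)); /* 1: prime */
    printf("%d\n", fermat_probable_prime(1000000008ULL)); /* 0: composite */
    return 0;
}
```

Note that the loop body is just one or two modular multiplications per exponent bit, which is why fast big-number modular multiplication is the thing to hunt for in a GPU library.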
It looks like the linked page implements (or would have implemented, had the links not been broken) the Ulam Spiral, which is a method of finding lots of primes at once and establishes primality outright (as opposed to the Fermat Test, which only shows probable primality). Like the Sieve of Eratosthenes used in the generation algorithm right now, it can be used to calculate every prime number up to a certain bound. This kind of sieve is applicable to Primecoin if it can be stopped early and still give useful information about a set of numbers (i.e., if you can run some number of iterations while only caring about a very sparse subset of candidates, and by the time you stop it has already eliminated most of the composites; there's a sketch of that stop-early idea below). However, without source code it isn't really a big help to developers right now.
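To make the stop-early idea concrete, here's a small C sketch (my own, with made-up bounds) of a partial Sieve of Eratosthenes: it only crosses off multiples of primes up to a small limit instead of sieving completely, so the survivors aren't guaranteed prime, but most composites get removed cheaply and only the survivors would need an expensive Fermat test.

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Partial Sieve of Eratosthenes: cross off multiples of primes up to
 * `prime_limit` only, rather than sieving all the way to sqrt(n).
 * Survivors are not guaranteed prime, but most composites have been
 * eliminated with cheap arithmetic, and only the survivors need an
 * expensive primality test. Bounds are illustrative, not the ones
 * miners actually use. */
int main(void)
{
    const int n = 1000;          /* sieve candidates in [2, n) */
    const int prime_limit = 7;   /* only sieve with primes <= 7 */

    bool *composite = calloc(n, sizeof(bool));
    if (!composite)
        return 1;

    for (int p = 2; p <= prime_limit; p++) {
        if (composite[p])
            continue;            /* p itself was already crossed off */
        for (int multiple = 2 * p; multiple < n; multiple += p)
            composite[multiple] = true;
    }

    int survivors = 0;
    for (int i = 2; i < n; i++)
        if (!composite[i])
            survivors++;

    printf("%d of %d candidates survive the partial sieve\n",
           survivors, n - 2);
    free(composite);
    return 0;
}
```

With multiples of 2, 3, 5, and 7 removed, only about a quarter of the candidates survive, so the number of expensive big-number primality tests drops by roughly 4x before any modular exponentiation happens at all.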