Depends on the split. We know sieving runs about 25x faster on the GPU, but how much of the CPU time was spent sieving versus Fermat testing? Let x = CPU sieve time and y = CPU Fermat time. If x = y (50/50 split), the GPU cuts sieving to 2%, giving 52% of the original time = 92.3% speed increase. If x = 9y (90/10 split), sieving drops to 3.6%, giving 13.6% of the original time = 635% speed increase. If 9x = y (10/90 split), sieving drops to 0.4%, giving 90.4% of the original time = 10.6% speed increase. These are extremely rough figures, and GPU/CPU communication overhead will also apply. Maybe he'll move Fermat testing onto the GPU as well and get the full 25x (2400%) speed increase. Short answer: too many current unknowns.
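The estimate above is just Amdahl's law with the sieve as the accelerated fraction. A quick sketch of the same arithmetic (the splits are made-up, as above):

```python
# Amdahl's-law-style estimate: only the sieve stage gets the ~25x GPU speedup.
# sieve_frac is the (unknown) fraction of total CPU time spent sieving.
def speed_increase(sieve_frac, sieve_speedup=25.0):
    new_time = sieve_frac / sieve_speedup + (1.0 - sieve_frac)
    return (1.0 / new_time - 1.0) * 100.0  # percent speed increase

for frac in (0.5, 0.9, 0.1):
    print(f"{frac:.0%} sieving -> {speed_increase(frac):.1f}% faster")
# 50% sieving -> 92.3% faster
# 90% sieving -> 635.3% faster
# 10% sieving -> 10.6% faster
```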
I did a similar calculation, but then Koooooj@reddit pointed out that I was wrong: with a much more powerful sieve, the CPU has fewer tests to do for a fixed amount of output.
For example (with made-up numbers):
before: 10000 numbers ---sieve--> 100 numbers ---test--> 1 number
after : 10000 numbers ---sieve--> 10 numbers ---test--> 1 number
This way, even if we don't speed up the Fermat test itself at all, the time spent testing drops to 10% of what it was.
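Put another way (still with the made-up numbers): a deeper sieve shrinks the Fermat stage's workload even at unchanged per-test speed:

```python
# Hypothetical numbers from the example above: a stronger sieve passes
# fewer candidates through to the (unchanged) Fermat test stage.
survivors_before = 100  # candidates surviving the old CPU sieve
survivors_after = 10    # candidates surviving the deeper GPU sieve

# Fermat time scales with the number of survivors, so the test stage
# now takes only this fraction of its former time:
workload_ratio = survivors_after / survivors_before
print(workload_ratio)  # 0.1
```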