Topic: Re: Is there a way to use Python GPU for ECC speed up?
Board: Development & Technical Discussion
Merits: 3 from 3 users
by NotATether on 17/10/2022, 08:23:55 UTC
⭐ Merited by PawGo (1), vapourminer (1), ETFbitcoin (1)
I don't know of any ready-to-use 256-bit number NumPy libraries, but it is possible to create one by using 64-bit or 32-bit limbs for the math operations.
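
For illustration, here is a minimal sketch of that limb idea (my own example, not an existing library): a 256-bit value split into eight 32-bit limbs stored in uint64 slots, so a limb addition cannot overflow before the carry is propagated.

Code:
import numpy as np

def to_limbs(x):
    """Split a Python int into eight 32-bit limbs, least significant first."""
    return np.array([(x >> (32 * i)) & 0xFFFFFFFF for i in range(8)], dtype=np.uint64)

def from_limbs(limbs):
    """Recombine eight 32-bit limbs back into a Python int."""
    return sum(int(v) << (32 * i) for i, v in enumerate(limbs))

def add_limbs(a, b):
    """Add two 256-bit numbers limb by limb, propagating the carry.
    Modular reduction is left out to keep the sketch short."""
    out = np.zeros(8, dtype=np.uint64)
    carry = np.uint64(0)
    mask = np.uint64(0xFFFFFFFF)
    for i in range(8):
        s = a[i] + b[i] + carry   # at most ~2**33, fits comfortably in uint64
        out[i] = s & mask
        carry = s >> np.uint64(32)
    return out, carry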
You cannot speed up individual operations like point multiplication just by using a GPU, because a single CUDA core is much slower than a CPU core. You need to divide the full workload into many independent tasks that run in parallel in order to get a performance gain.
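
To make the "many independent tasks" point concrete, here is a hedged sketch using Numba's CUDA support (my choice of library, the post names none), where each GPU thread handles one element of a batch. The per-element body is a toy placeholder for real EC work.

Code:
import numpy as np
from numba import cuda

@cuda.jit
def batch_kernel(scalars, results):
    i = cuda.grid(1)              # absolute thread index
    if i < scalars.size:
        # Placeholder: in a real kernel, each thread would do one full
        # EC scalar multiplication here, in fixed-width limb arithmetic.
        results[i] = scalars[i] + scalars[i]   # toy doubling

scalars = np.arange(1_000_000, dtype=np.uint64)
results = np.zeros_like(scalars)
threads = 256
blocks = (scalars.size + threads - 1) // threads
batch_kernel[blocks, threads](scalars, results)  # ~10^6 tasks in flight

Each thread is slow on its own; the speedup comes only from having thousands of such independent tasks running at once.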

I guess you could try an algorithm that computes multiple point multiplications at once, incrementally rather than with threads or CUDA cores. This will save you time as long as you batch-multiply at least as many points as it takes to break even with (according to the paper) 5 serial ECmults.
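
The classic way to get that kind of batching (my guess at the technique, since the post does not name the paper) is Montgomery's batch inversion: one modular inversion is shared across a whole batch of affine point operations, because the inversion is what dominates each serial ECmult. A minimal sketch in plain Python:

Code:
P = 2**256 - 2**32 - 977  # secp256k1 field prime

def batch_inverse(values, p=P):
    """Invert many field elements with a single modular inversion
    (Montgomery's trick): about 3n multiplications + 1 inversion
    instead of n inversions."""
    prefix = [1]
    for v in values:
        prefix.append(prefix[-1] * v % p)   # running products
    inv = pow(prefix[-1], -1, p)            # the only inversion
    out = [0] * len(values)
    for i in range(len(values) - 1, -1, -1):
        out[i] = inv * prefix[i] % p        # = values[i]^-1 mod p
        inv = inv * values[i] % p           # drop values[i] from the product
    return out

Each point operation in the batch then costs only a few multiplications on top of its share of that one inversion, which is where a break-even point of a handful of serial ECmults comes from.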