I was wondering about that when I saw it but hadn't followed up (too much multitasking).
I think I found the problem, and it's been there since day 1. The argument to SetThreadAffinityMask is supposed to be a DWORD_PTR, not a DWORD, so affinity_mask should be declared DWORD_PTR. Despite the name, DWORD_PTR isn't a pointer to a DWORD; it's an unsigned integer the size of a pointer, so on 64-bit Windows it's 64 bits wide and can cover all 64 CPUs.
It's all clear now. Thanks for your persistence.
Thank YOU for all your hard work.
My plan for the opt_affinity option, once it becomes an __int128, is to replicate the 64-bit command-line mask into the upper 64 bits of opt_affinity. The debug output will use a union overlay to display the __int128 as two 64-bit integers.
This might allow specifying affinity for more than 64 CPUs on Linux.
EDIT:
It's looking good. I'm able to do some basic arithmetic with __int128; in particular I can shift by 64 bits,
so I can easily extract both halves for 64-bit formatting. I didn't have to use a union. My testing
capabilities are limited without a CPU of more than 64 cores, but I was able to see a correctly formed 128-bit
affinity mask.
I won't try handling the 128-bit affinity for big CPUs on Linux yet; I want this change to soak first.