I've been looking at the code, and there are quite a few magic numbers in there.

Can you clarify this? If you mean "32", that is the byte size of the field elements used for all of the arithmetic. If you mean the 32-bit hex numbers, those are parameters of the curve as defined by NIST. If you mean the "0", "-1", and "-2" error return values, I'd guess that a PR to #define better names for these would be welcome.
Edit: I've been reminded that the secp256k1 curve is actually defined by SECG, not NIST. My bad.
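To illustrate the kind of cleanup I mean, such a PR might add names along these lines (these are invented here purely as an example; the library doesn't define them, and what each code means varies from function to function):
[code]
/* Invented names, purely for illustration -- the library does not define these,
 * and the exact meaning of each code varies from function to function. */
#define SECP256K1_RET_OK       1   /* operation succeeded                       */
#define SECP256K1_RET_NO       0   /* operation completed, but the answer is no */
#define SECP256K1_RET_ERR_A   -1   /* one class of bad input                    */
#define SECP256K1_RET_ERR_B   -2   /* another class of bad input                */
[/code]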
The curve constants I'm fine with. There seem to be a few loops with arbitrary upper limits, though:
[code]
void print_number(double x) {
    double y = x;
    int c = 0;
    if (y < 0.0) y = -y;
    while (y < 100.0) {
        y *= 10.0;
        c++;
    }
    printf("%.*f", c, x);
}
[/code]
Why 100?
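If I trace it by hand, the 100.0 threshold seems to mean "keep scaling until there are at least three digits before the decimal point", so small values get printed with roughly three significant figures (though I don't see why three specifically). A quick standalone check (my own test program, not anything in the repo):
[code]
/* My own standalone check of print_number (not from the repo). */
#include <stdio.h>

static void print_number(double x) {   /* copied from above */
    double y = x;
    int c = 0;
    if (y < 0.0) y = -y;
    while (y < 100.0) {
        y *= 10.0;
        c++;
    }
    printf("%.*f", c, x);
}

int main(void) {
    print_number(123.456); printf("\n");   /* prints "123"     -- already >= 100, so 0 decimals */
    print_number(5.0);     printf("\n");   /* prints "5.00"    -- scaled twice, so 2 decimals   */
    print_number(0.001234); printf("\n");  /* prints "0.00123" -- scaled five times, 5 decimals */
    return 0;
}
[/code]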
[code]
int secp256k1_ecdsa_verify(const unsigned char *msg32, const unsigned char *sig, int siglen, const unsigned char *pubkey, int pubkeylen) {
    secp256k1_ge_t q;
    secp256k1_ecdsa_sig_t s;
    secp256k1_scalar_t m;
    int ret = -3;
    DEBUG_CHECK(secp256k1_ecmult_consts != NULL);
    DEBUG_CHECK(msg32 != NULL);
    DEBUG_CHECK(sig != NULL);
    DEBUG_CHECK(pubkey != NULL);
    secp256k1_scalar_set_b32(&m, msg32, NULL);
    if (!secp256k1_eckey_pubkey_parse(&q, pubkey, pubkeylen)) {
        ret = -1;
        goto end;
    }
    if (!secp256k1_ecdsa_sig_parse(&s, sig, siglen)) {
        ret = -2;
        goto end;
    }
    if (!secp256k1_ecdsa_sig_verify(&s, &q, &m)) {
        ret = 0;
        goto end;
    }
    ret = 1;
end:
    return ret;
}
[/code]
I guess we are just setting returns to negatives to represent errors?
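Reading the branches above, I take the codes to mean roughly the following (my own summary, not something the library documents):
[code]
/* My own reading of secp256k1_ecdsa_verify's return codes (not from the repo). */
static const char *describe_verify_result(int ret) {
    switch (ret) {
        case  1: return "signature is valid";
        case  0: return "signature does not verify";
        case -1: return "public key could not be parsed";
        case -2: return "signature could not be parsed";
        default: return "unexpected return value";
    }
}
[/code]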
[code]
void bench_scalar_sqr(void* arg) {
    int i;
    bench_inv_t *data = (bench_inv_t*)arg;
    for (i = 0; i < 200000; i++) {
        secp256k1_scalar_sqr(&data->scalar_x, &data->scalar_x);
    }
}
[/code]
Why 200,000?
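I assume the constant is just there so each run takes long enough to measure, with the harness dividing by the iteration count to get a per-operation figure; something like this generic sketch (my own, not the repo's actual bench code):
[code]
/* Generic timing sketch (my own, not the repo's bench harness): run the
 * benchmark body once and report the average time per inner operation. */
#include <time.h>

static double seconds_per_op(void (*benchfn)(void*), void *arg, int count) {
    clock_t begin = clock();
    benchfn(arg);   /* the bench_* body loops 'count' times internally */
    clock_t end = clock();
    return ((double)(end - begin) / CLOCKS_PER_SEC) / count;
}

/* e.g. seconds_per_op(bench_scalar_sqr, &data, 200000) */
[/code]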
Sorry, just trying to understand the code better.