As I recall, it has to do with making sure the code properly handles numbers larger than what 32-bit integers can natively hold.
Hello. I use big.Rat (golang) on the header hash to get the difficulty of the share. It can handle huge difficulty numbers when comparing shareDiff and networkDiff. I don't use int64 or int32.
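For reference, here's roughly what that looks like with Go's math/big; this is only an illustrative sketch (the function name and the assumption that the hash arrives as a big-endian hex string are mine, not from the post above):

```go
package main

import (
	"fmt"
	"math/big"
)

// diff1Target is the Bitcoin difficulty-1 target:
// 0x00000000ffff followed by 52 zero hex digits (a 256-bit value).
var diff1Target = new(big.Int).Lsh(big.NewInt(0xFFFF), 208)

// shareDiff returns diff1Target / hashValue as an exact rational.
// hashHex is assumed to already be the header hash as a big-endian hex string.
func shareDiff(hashHex string) (*big.Rat, error) {
	hashVal, ok := new(big.Int).SetString(hashHex, 16)
	if !ok || hashVal.Sign() == 0 {
		return nil, fmt.Errorf("bad hash hex: %q", hashHex)
	}
	return new(big.Rat).SetFrac(diff1Target, hashVal), nil
}

func main() {
	// A hash exactly at the difficulty-1 target gives a share difficulty of 1.
	d, err := shareDiff("00000000ffff0000000000000000000000000000000000000000000000000000")
	if err != nil {
		panic(err)
	}
	fmt.Println(d.FloatString(6)) // 1.000000
}
```

Comparing shareDiff against networkDiff is then just a big.Rat Cmp call, with no overflow to worry about.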
1) That means all math calculations using it in Go are exceptionally slow compared to the C library
2) While this may give you decimal accuracy to whatever level you require, it won't be infinite, so there must be some default setting you should check
Bitcoin numbers are ridiculously large compared to most other things done in the world, so you'd better ensure the default setting is big enough (see the sketch below).
e.g. a 256-bit number can count roughly 1/1000th of all the atoms in the entire universe.
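To make the scale concrete, here's a tiny illustrative Go check (not from the discussion above) showing that a 256-bit hash value survives in math/big but loses its low bits the moment it's squeezed into a float64:

```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	// 2^256 - 1: the largest value a 256-bit hash can hold.
	max256 := new(big.Int).Sub(new(big.Int).Lsh(big.NewInt(1), 256), big.NewInt(1))
	almost := new(big.Int).Sub(max256, big.NewInt(1))

	// math/big is arbitrary precision, so the difference of 1 is preserved.
	fmt.Println(new(big.Int).Sub(max256, almost)) // 1

	// float64 only has a 53-bit mantissa, so both values round to the same number.
	f, _ := new(big.Float).SetInt(max256).Float64()
	g, _ := new(big.Float).SetInt(almost).Float64()
	fmt.Println(f == g) // true: the low bits are simply gone
}
```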
When I start my current version of KDB, I have it do decimal-place accuracy calculations of the current difficulty against known expected answers, to verify there is always enough accuracy.
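KDB itself is not Go, and its actual check isn't shown here, but the idea is easy to sketch; in Go it might look something like this (the test vector, names, and decimal-place count are purely illustrative):

```go
package main

import (
	"fmt"
	"math/big"
)

// selfTest recomputes a difficulty whose exact answer is known in advance and
// checks it to a fixed number of decimal places.
func selfTest() error {
	diff1 := new(big.Int).Lsh(big.NewInt(0xFFFF), 208)

	// A hash at exactly half the difficulty-1 target must give difficulty 2.
	hash := new(big.Int).Rsh(diff1, 1)
	got := new(big.Rat).SetFrac(diff1, hash).FloatString(9)

	if want := "2.000000000"; got != want {
		return fmt.Errorf("difficulty self-test failed: got %s want %s", got, want)
	}
	return nil
}

func main() {
	if err := selfTest(); err != nil {
		panic(err) // refuse to start if the arithmetic is off
	}
	fmt.Println("difficulty accuracy self-test passed")
}
```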
I also added code to cgminer last year (in the public git) to optionally verify that the binary you generate correctly handles 32 to 240 bits of leading zeros on a hash
(the last 16 bits have a value in them for testing - a sketch of the same idea follows the example below)
i.e. up to 000000000000000000000000000000000000000000000000000000000000ffff
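A self-test along those lines might look like the following Go sketch; it is not the cgminer code, just the same idea of sweeping leading-zero counts (with a value in the low 16 bits) and making sure the difficulty arithmetic behaves:

```go
package main

import (
	"fmt"
	"math/big"
)

// zeroBitsHash builds a 256-bit hash with `zeros` leading zero bits and a
// value of 0xffff in its lowest 16 bits (illustrative only).
func zeroBitsHash(zeros uint) *big.Int {
	h := new(big.Int).Lsh(big.NewInt(1), 255-zeros) // highest set bit
	return h.Or(h, big.NewInt(0xffff))              // value in the last 16 bits
}

func main() {
	diff1 := new(big.Int).Lsh(big.NewInt(0xFFFF), 208)

	// Sweep 32..240 leading zero bits: each step has a smaller hash, so the
	// resulting difficulty must strictly increase, or the arithmetic is broken.
	prev := new(big.Rat)
	for zeros := uint(32); zeros <= 240; zeros += 8 {
		d := new(big.Rat).SetFrac(diff1, zeroBitsHash(zeros))
		if zeros > 32 && d.Cmp(prev) <= 0 {
			panic(fmt.Sprintf("difficulty did not increase at %d leading zeros", zeros))
		}
		prev = d
	}
	fmt.Println("32..240 leading-zero sweep passed")
}
```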
So while you say 'yeah, this language will handle it', it is advisable to actually check that yourself
rather than just assume it's OK ...