Now that I think of it, we did talk about the floating-point format in that discussion. Divisibility to 8 decimal places was the maximum Satoshi would consider, for that reason (although he was a fanatic about doing everything with unsigned integers). Hal's point was that the smallest division would be worth less than a penny even if the whole world's money supply were denominated in Bitcoin, which meant no extraordinary measures were necessary.
One thing I learned was that in C, overflow on signed integers is undefined behavior, and some compilers (notably gcc, which Satoshi was using) will even eliminate overflow checks that depend on it, and then drop the error handlers and debug-output statements behind them as dead code. Which is another reason why Satoshi was such a fanatic about using unsigned integers everywhere.
IOW, if you check for overflow by writing code like
#include <stdio.h>
#include <stdlib.h>

int a, b, c;
/* ... a and b acquire values ... */
/* this test only catches anything if signed addition wraps -- which is undefined behavior */
if (a > 0 && b > 0 && a + b <= 0) {
    fprintf(stderr, "integer overflow at checkpoint 232\n");
    exit(-1);
}
c = a + b;
gcc will eliminate the whole clause because, in its tiny little brain, signed integer overflow is undefined behavior, so it can do whatever it likes with statements that can only matter if overflow has defined semantics. And since it's trying to make the code shorter, faster, and less complicated, it just drops 'em.
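For what it's worth, you can write a signed check that gcc has to keep, by testing against the limits before you add, so the overflow never happens in the first place. This is my sketch, not anything that was in the Bitcoin code:
#include <limits.h>

int a, b, c;
/* ... a and b acquire values ... */
/* compare against INT_MAX / INT_MIN *before* adding; no overflow ever occurs, so no UB */
if ((b > 0 && a > INT_MAX - b) || (b < 0 && a < INT_MIN - b)) {
    fprintf(stderr, "integer overflow at checkpoint 232\n");
    exit(-1);
}
c = a + b;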
But if you check for overflow by writing
unsigned int a, b, c;
/* ... a and b acquire values ... */
/* wrap-around test: well-defined, because unsigned arithmetic is modular */
if (a + b < b || a + b < a) {
    fprintf(stderr, "unsigned integer overflow at checkpoint 233\n");
    exit(-1);
}
c = a + b;
gcc will actually emit the code for that check, because unsigned arithmetic is specified as modular (addition and subtraction wrap modulo 2^N), so it can't replace the if condition with if(0) and then drop the whole thing.
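You can see the modular behavior directly; this little test program is mine, nothing to do with the Bitcoin code:
#include <stdio.h>
#include <limits.h>

int main(void) {
    unsigned int x = UINT_MAX;
    /* wraps around to 0 -- defined behavior for unsigned types, not UB */
    printf("%u\n", x + 1u);
    return 0;
}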
Not that I ever saw such baroque error checking in the Bitcoin code.
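These days gcc and clang also have builtins for this -- __builtin_add_overflow and friends, which showed up years later -- so you can write the check without caring whether the type is signed or unsigned. Something like this (again my sketch, not anything Satoshi wrote):
int a, b, c;
/* ... a and b acquire values ... */
/* returns true if a + b doesn't fit in c; the wrapped result is stored either way */
if (__builtin_add_overflow(a, b, &c)) {
    fprintf(stderr, "integer overflow at checkpoint 234\n");
    exit(-1);
}
/* c now holds a + b */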