This comment makes no sense!
Both of those lines do literally the exact same thing. The first one might even make some picky compilers or lint tools grumble about setting a uint32_t (32 bits!) to a value padded out to 48 bits in length (why?!), even though the extra digits are all leading zeros and change nothing.
In C, setting "a = 0xFFFF", "a = 0x00FFFF", "a = 65535", and "a = 0x0000FFFF" all have the EXACT same result. The only difference is readability.
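For what it's worth, here's a minimal sketch (assuming "a" is a uint32_t, as in the code being discussed) showing that every spelling stores the identical value:

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t a;

        a = 0xFFFF;      /* plain hex */
        assert(a == 65535u);
        a = 0x00FFFF;    /* leading zeros: same value, same type */
        assert(a == 65535u);
        a = 0x0000FFFF;  /* more leading zeros: still 65535 */
        assert(a == 65535u);
        a = 65535;       /* decimal spelling of the same constant */
        assert(a == 65535u);

        printf("a = %u\n", (unsigned)a);  /* prints 65535 every time */
        return 0;
    }

None of these assignments should trip a warning even at high warning levels, because leading zeros never change a constant's value or its type.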
Please explain why you would ever use the first statement.