Currently, the default integer type is a BigInt. It is not bounded, so I have to check for overflows. The casting behavior that I defined with code (not tests) is:
- If the number is negative, interpret it as zero.
- Otherwise, take the minimum of the number and the maximum word-sized unsigned integer.
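The clamping rule above can be sketched as follows. This is an illustrative sketch, not the actual implementation; it assumes a 64-bit word, and the function name `cast_to_word` is hypothetical (Python's unbounded `int` stands in for the BigInt):

```python
# Assumed word size: 64 bits (the actual target word size may differ).
UINT64_MAX = 2**64 - 1

def cast_to_word(n: int) -> int:
    """Clamp an unbounded integer into the unsigned word range."""
    if n < 0:
        return 0                    # negative values are interpreted as zero
    return min(n, UINT64_MAX)       # cap at the maximum word-sized unsigned integer
```

For example, `cast_to_word(-7)` yields `0`, in-range values pass through unchanged, and anything above `2**64 - 1` saturates to the maximum.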