Aurel300 opened this issue 3 years ago
I never liked how we silently create floating point numbers from literals that look like integers. I understand why this happens on a technical level, but it's still pretty confusing when you don't know what's going on.
On the other hand, I don't know if I want to randomly change this... this is probably one of these situations where the best move is not to play.
I always thought it was a "feature" to specify the bytes of an integer value. For example, 0xFFFFFFFF (4 bytes) compiles to the literal -1.
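The collapse from 0xFFFFFFFF to -1 is plain two's-complement reinterpretation. A minimal sketch in JavaScript, where a bitwise op coerces a number to a signed Int32 (the `toInt32` helper is invented for this illustration, not a Haxe or JS builtin):

```javascript
// Reinterpret an unsigned 32-bit value as a signed Int32 (two's complement).
// "toInt32" is a made-up name for this sketch.
function toInt32(u) {
  // Bitwise OR with 0 coerces the operand to a signed 32-bit integer,
  // which is essentially what happens to hex literals with bit 31 set.
  return u | 0;
}

console.log(toInt32(0xFFFFFFFF)); // all 32 bits set -> -1
console.log(toInt32(0x80000000)); // only bit 31 set -> -2147483648
console.log(toInt32(0x7FFFFFFF)); // sign bit clear  -> 2147483647
```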
> I never liked how we silently create floating point numbers from literals that look like integers. I understand why this happens on a technical level, but it's still pretty confusing when you don't know what's going on.
> On the other hand, I don't know if I want to randomly change this... this is probably one of these situations where the best move is not to play.
imo it's nice to have a warning here, at least for 2147483648. And from JavaScript's perspective, 0x80000000 being negative is the weird one; -0x1 should never look illegal.
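For comparison, in JavaScript itself hex literals are plain double-precision numbers, so 0x80000000 stays positive until a bitwise operation forces the Int32 coercion, and -0x1 is just a negated literal:

```javascript
// JavaScript number literals are doubles; no silent Int32 wrap at parse time.
console.log(0x80000000);     // 2147483648 (positive)
console.log(0x80000000 | 0); // -2147483648 (negative only after Int32 coercion)
console.log(-0x1);           // -1 (a perfectly legal literal)
```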
> I never liked how we silently create floating point numbers from literals that look like integers.
Perhaps we should issue a warning then? From the user's end it's easily fixed by adding .0 to the literal (and is also fully backward compatible with older Haxe versions).
I like the idea with a warning.
Would that warning be consistent for all too-big int representations? 0xFFFFFFFF is used in many Kha projects at least, and it would be nice to keep that value as an option for setting the white color, instead of -1.
> 0xFFFFFFFF is used at least in many Kha projects, and would be nice to have that value option to set white color, instead of -1.
It is odd that the compiler would let this overflow instead of leaving it to the runtime. Is that really by design?
That said, upon extracting channels with bitwise ops the result should still be 0xFF for each channel, so in essence it is still white, no?
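That channel argument can be checked directly: even once 0xFFFFFFFF has collapsed to -1, masking each ARGB channel out with bitwise ops still yields 0xFF. A sketch (the `channel` helper is invented here):

```javascript
// Extract one 8-bit channel from a packed ARGB color.
// "channel" is a helper invented for this sketch.
function channel(color, shift) {
  // Unsigned right shift keeps the high bits from smearing the sign.
  return (color >>> shift) & 0xFF;
}

const white = -1; // what the literal 0xFFFFFFFF becomes as a signed Int32
console.log(channel(white, 24)); // alpha -> 255
console.log(channel(white, 16)); // red   -> 255
console.log(channel(white, 8));  // green -> 255
console.log(channel(white, 0));  // blue  -> 255
```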
> Is that really by design?
I think it's a consequence of using OCaml's Int32 to represent integer constants in the compiler.
Yeah, but that doesn't actually seem to be the case.
The confusion here comes from this issue title being inaccurate: it parses as Int but is not typed as such, which is where the int32 stuff comes in.
> Yeah, but that doesn't actually seem to be the case.
https://github.com/HaxeFoundation/haxe/blob/f5986de3c918b887a231e36d16334aa3974cf3aa/src/core/tType.ml#L82-L83 https://github.com/HaxeFoundation/haxe/blob/f5986de3c918b887a231e36d16334aa3974cf3aa/src/core/texpr.ml#L569-L574
OCaml's Int32.of_string successfully parses hexadecimal strings as a byte representation of an Int32 value.
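Roughly, that parsing behavior can be simulated like this: the hex digits are taken as the 32-bit bit pattern, so values above 2^31 - 1 wrap to negative instead of failing. The `int32OfHex` helper below is an invented approximation, not the OCaml implementation:

```javascript
// Approximate simulation of Int32.of_string on a hex string: the digits are
// treated as the byte pattern of an Int32, so bit 31 becomes the sign bit.
// "int32OfHex" is invented for this sketch.
function int32OfHex(s) {
  const u = parseInt(s, 16); // parse the digits as an unsigned value
  if (Number.isNaN(u) || u > 0xFFFFFFFF) throw new RangeError(s);
  return u | 0;              // reinterpret the 32-bit pattern as signed
}

console.log(int32OfHex("FFFFFFFF")); // -1
console.log(int32OfHex("80000000")); // -2147483648
console.log(int32OfHex("7FFFFFFF")); // 2147483647
```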
Ah. Hmm. That's unfortunate. But it brings me back to my previous question: is that by design?
There's of course the possibility that changing the behavior to avoid the overflow (where possible) would break existing code, although I wonder if anyone would really write 0xFFFFFFFF to express -1 (it's the same number, with all 32 bits set).
On second thought, maybe this is fine; our decimal literals should only support the 32-bit signed integer range, which goes up to 2147483647.
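The range check such a warning could use is straightforward: a decimal literal is only an unambiguous Int if it fits in [-2147483648, 2147483647]. A sketch (the `fitsInt32` helper is invented here, not compiler code):

```javascript
// Would this decimal value fit in a signed 32-bit integer?
// "fitsInt32" is a helper invented for this sketch.
function fitsInt32(n) {
  return Number.isInteger(n) && n >= -2147483648 && n <= 2147483647;
}

console.log(fitsInt32(2147483647));  // true  - largest Int32
console.log(fitsInt32(2147483648));  // false - would need a warning (or a Float)
console.log(fitsInt32(-2147483648)); // true  - smallest Int32
```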