rickwebiii opened 5 years ago
Of course it doesn't quite work out, as neither `Infinity` nor `Infinity as const` will satisfy type `Inf`. Pretty cool though!
@nmain Fun trick. GitHub definitely needs a "please don't put that in your code" reaction 😊
`Infinity` will not satisfy `Inf` 🤔
```ts
// THIS CODE IS EVIL AND YOU SHOULD RUN AWAY FROM IT
type Infinity = 1e999; // 1e999 overflows double precision and parses to Infinity

// An immediately-invoked custom type guard that unconditionally claims its
// argument is of type Infinity; control flow analysis then narrows the
// global Infinity value for the rest of the scope.
if (!((x: any): x is Infinity => true)(Infinity)) throw new Error();

const a: Infinity = Infinity; // okay now
```
I don't care what the warnings are. I want those `NaN` and `Infinity` literals. I also want a "finite number" type.
I'm going to find some way to shoehorn that Infinity trick into my code and see if anything explodes
IEEE `INF` has some really useful properties which aren't strictly mathematically correct (such as `x / 0 === Infinity` or `x % Infinity === x`), and I've actually found them useful in practice from time to time.

That said, not useful enough to inject that much evilness into my code. :smile:
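For what it's worth, those properties are plain JavaScript semantics and easy to check (a minimal sketch, nothing TypeScript-specific):

```ts
// Standard IEEE-754 / JavaScript behavior, runnable as-is:
const x = 5;
console.log(x / 0 === Infinity); // true: positive finite / 0 gives Infinity
console.log(x % Infinity === x); // true: x % Infinity === x for finite x
```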
```ts
type NegativeZero = -0 extends 0 ? 'yes' : 'no'; // Resolves to 'yes'
```
This one is kind of interesting, because technically, by the meaning of `extends`, the conditional type is correct anyway: `-0` is indeed substitutable for `0` (as they compare equal). Likewise for the ones with rounding errors.
At the end of the day, floats are goofy and it might be fine to let the type system reflect that except in a few edge cases you really want to manage. There are an infinite number of decimal representations (memory granted) of every non-NaN number, and I don't think you can rightly ordain one as provenance. This means roundoff on types is just going to be a thing.
±0 is usually not a useful distinction. Most of the cases I can think of are pedantry. For example, `Math.sign` returns `-0` if you pass `-0`, so you could type its return value as `-1 | -0 | NaN | 0 | 1`. Without the `-0` type, you might be surprised that at runtime `1 / Math.sign(x)` can return `-Infinity` while the return type led you to believe you only had to check `NaN`, `Infinity`, and `±1`. But this is also moot without a `NaN` literal type.
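A minimal sketch of that surprise at runtime (standard `Math.sign` semantics, nothing hypothetical here):

```ts
// Math.sign(-0) returns -0 per spec, so dividing by the result
// can produce -Infinity even though -0 === 0.
const s = Math.sign(-0);
console.log(Object.is(s, -0)); // true: the result really is -0
console.log(1 / s);            // -Infinity, not Infinity
```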
I don't know that much about type theory, but I think of the `number` type as being a very large finite union of one representation of every non-NaN floating point binary sequence (including distinct `+0` and `-0`, and `+Infinity` and `-Infinity`) and a single `NaN`. `NaN` has many binary representations, but you can't tell them apart in JavaScript. This is regardless of what the equality operator does. That looks like:
```
NaN | -Infinity | FLOAT_MIN | FLOAT_MIN + ULP | FLOAT_MIN + 2 * ULP | ... | -ULP | -0 | 0 | ULP | ... | FLOAT_MAX - 2 * ULP | FLOAT_MAX - ULP | FLOAT_MAX | Infinity
```
This is how I mentally reason about floating point numbers, and having a type system that matches it would be convenient for reasoning about TypeScript numbers. However, 1:1 matching of the floating point number line to type literals may not be desirable in a type system for reasons I don't yet understand (I'm not a compiler expert), and it probably isn't worth the effort to accomplish, as has been stated in other threads.
At the end of the day, I don't know if I've ever even used number literal types other than in a few toys to show you can; I find strings and enums to be more semantically meaningful for representing finite sets of things, so I use those literal types almost exclusively. With numbers, I almost always want to add, subtract, and multiply arbitrary values at which point you're going to widen any literal types you have anyways.
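To illustrate the widening I mean (a tiny sketch; behavior as of current TypeScript):

```ts
const one = 1;         // inferred type: the literal type 1
const sum = one + one; // inferred type: number (arithmetic widens literal types)
```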
That being said, if TypeScript is choosing to disallow `NaN` and `Infinity` types, then I think it should disallow anything that parses to ±Infinity, as well as the literal word `Infinity`. 🙃
One of my favorite anti-tautologies in JavaScript is that `typeof(NaN) === 'number'`; that which is in fact not a number is in fact a `number`. Forget the usual silliness people point out, like `[] == false`. You just don't get great existential conundrums in languages that make developers choose the signedness, size, and fixed-or-floatiness of their numbers.
IEEE floats actually aren’t that goofy (outside of signed zero and NaN which are IMO both abominations) once you realize it’s just scientific notation in base 2 instead of 10. You have a finite number of bits to represent the mantissa, so it’s only natural that things like repeating decimals and irrationals can’t be represented in that system. That’s also where the “floating” in floating point comes from - increasing the exponent moves the decimal point within a fixed-size mantissa, so you gain insane amounts of range at the expense of precision on the high-end. It’s kind of fascinating—to me anyway.
Even NaN and signed zero serve valid purposes. I actually wrote a technical article on why we have NaN in our floating-point standard, if you're interested. The TL;DR is that if you think it's an abomination, you haven't seen the things it's replacing.
As for signed zeros, those are apparently used for contour integration in complex analysis. While I know what those words mean, I've never needed to implement a program that requires the distinction.
Quoting @fatcerberus:

> signed zero [...] abomination

Quoting @theverymodelofamodernmajorgeneral:

> Even [...] signed zero serve valid purposes.
> [...] those are apparently used for contour integration in complex analysis
Actually, `-0` means "any negative real whose order of magnitude is too low to be representable in a fixed-precision float", which is very useful in the context of calculus, because of limits. Limits are the easiest part of calculus, and they're implicitly used outside of calculus as well. This is the reason why `1 / 0 == Infinity && 1 / -0 == -Infinity` instead of `NaN`. See this video by Jan Misali for more info.
However, to be fair, `Object.is(Math.sqrt(-0), -0) && Object.is((-0) ** +0.5, +0)` is extremely weird and nonsensical. See the related Reddit post.
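For the record, all of the claims above are checkable in a console (standard JavaScript semantics, runnable as-is):

```ts
console.log(1 / 0 === Infinity);           // true
console.log(1 / -0 === -Infinity);         // true
console.log(Object.is(Math.sqrt(-0), -0)); // true: Math.sqrt preserves the sign of zero
console.log(Object.is((-0) ** 0.5, +0));   // true: ** follows Math.pow, which returns +0 here
```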
As explained in #15135 and #15351, the TypeScript team doesn't intend to support `NaN` and `Infinity` literals, due to complexity. Explicitly typing something as `Infinity` or `-Infinity` gives an error.
As it turns out, you actually can get the `Infinity` type fairly easily. Just use a really large number that parses to Infinity as your type, like `1e999`; `-1e999` gives you negative Infinity. Fortunately, I don't believe you can get `NaN` from just parsing a double-precision literal other than `NaN` itself. `0/0` is the simplest way to get `NaN`, but that's two literals and a divide.
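A minimal sketch of that trick (assuming, as described in this thread, that numeric literal types are stored as doubles):

```ts
type Inf = 1e999;     // the literal overflows double precision, so this is the Infinity type
type NegInf = -1e999; // likewise for -Infinity

// The catch mentioned earlier in the thread: the global Infinity value is
// just typed `number`, so it isn't assignable to the literal type directly.
// const x: Inf = Infinity; // error: 'number' is not assignable
```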
There are a number of other floating point literal peculiarities resulting from round-off as well. I fully expect the TypeScript team to "won't fix" them.
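One example of the kind of round-off peculiarity I mean (an illustration assuming literal types are stored as IEEE doubles): `2**53` and `2**53 + 1` parse to the same double, so they end up being the same literal type.

```ts
type A = 9007199254740992; // 2**53
type B = 9007199254740993; // 2**53 + 1; the literal rounds to 2**53
type Same = B extends A ? 'yes' : 'no'; // resolves to 'yes'
```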
Here's another.
In this case, `-0` and `0` are actually distinct numbers with different bit patterns. Both the Chrome and Firefox dev consoles correctly echo `-0` as `-0` rather than `0`. However, IEEE-754 defines `0 === -0` to be true, so I'm not surprised TypeScript thinks they're the same type. The intent of this distinction is to convey the idea of limits as a numeric representation (not to mention floating point is sign-magnitude anyway, so why not embrace it). For example, `1 / -Infinity` and `-8 * 0` both result in `-0` rather than `0`. For a type system, I'm not sure you strictly need or want to maintain this distinction.
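The runtime distinction is easy to observe even though `===` can't see it (standard JavaScript, runnable as-is):

```ts
console.log(-0 === 0);         // true: IEEE-754 comparison treats them as equal
console.log(Object.is(-0, 0)); // false: Object.is distinguishes them
console.log(1 / -Infinity);    // -0
console.log(-8 * 0);           // -0
```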
**TypeScript Version:** Repro'd on playground

**Search Terms:** Infinity, NaN

**Code:**

**Expected behavior:**

**Actual behavior:**

**Playground Link:** Here

**Related Issues:**