Closed — @Igorbek closed this issue 2 years ago.
The assignments you've noted are by design. When checking assignability of function types, their parameters are considered bivariant. So when assigning `fa = fb`, it is valid if `fa`'s parameters are assignable to `fb`'s, or vice versa. Likewise for the function-typed members of `Array` when determining whether `as = bs` is valid. See https://github.com/Microsoft/TypeScript/issues/274 for the suggestion to check these more strictly all the time.
Co/contravariance is not an easy concept for a lot of folks to grasp (even though there is an intuitive nature to the assignment questions). Explicit annotations for them can be especially difficult for people to make use of (particularly if they have to author the annotations in their own code). C# has these types of annotations now but was highly successful for many years without the additional layer of checking the variance annotations afford. My gut feeling is that I would be surprised if these ended up in TypeScript given their complexity and how 'heavy' a concept they are.
@danquirk I disagree that covariance/contravariance are such "heavy" concepts. In practice, it's generally just library authors that use them (not end users), and they're easy to ignore, so I don't think you're adding a burden to the language. Yet on the flip side, you're adding a lot of expressiveness to situations that would otherwise be less type safe.
Also, at a syntax level, having optional keywords of `in` and `out` seems to me to be as lightweight as language constructs get.
I certainly wouldn't consider this feature high enough priority to want it added to the backlog anytime soon, but once TypeScript is more mature it could certainly be a useful addition.
So, if we go with some "strict" mode only, there's no way to annotate variance, because it shouldn't introduce any new keywords. I believe in most cases TypeScript will be able to infer variance from usage, and bivariance for functions could be a fallback if no modifier is specified.
I've read the Function Argument Bivariance article, and it looks like the example might be rewritten more clearly and usefully without the bivariance requirement:
```ts
enum EventType { Mouse, Keyboard }

interface Event { timestamp: number; }
interface MouseEvent extends Event { x: number; y: number }
interface KeyEvent extends Event { keyCode: number }

function listenEvent<TEvent extends Event>(eventType: EventType, handler: (n: TEvent) => void) {
  /* ... */
}

listenEvent<MouseEvent>(EventType.Mouse, e => console.log(e.x + ',' + e.y));
```
@MgSam thanks for the support.
Outputs are always covariant and arguments are usually contravariant. The only point where we'd need an annotation is when an argument is also used as an output. I think it would be a lot easier to grasp if all type constructors and generics came equipped with their associated `map` functions, since then you just look at what order the mapped functions get applied.
@metaweta callbacks are bivariant now. That also needs to be clarified/made stricter.
I don't know what you mean by "callbacks are bivariant": variance is a concept that applies to a specific use of a type in a signature. I was suggesting that types used in arguments be contravariant by default; now that I think more about it, that would cause new errors in currently working code, which is unacceptable.
To avoid the issue of a new keyword, I suggest using +/- like Scala does.
How about a directive "use variance" to opt into that assumption? That way a library author can avoid having to add the contravariant modifier everywhere, while the user of the library doesn't have to worry about it. Directives are a part of ECMAScript specifically designed for this kind of scoped alteration of semantics.
I mean that the callback is an argument which is a function:
```ts
class A { /* ... */ }
class B extends A { /* ... */ }
class C extends B { /* ... */ }

function f(callback: (b: B) => void) { /* ... */ }

f((x: A) => { /* ... */ }); // no error: the callback accepts an A, which is fine since any B is an A
f((x: C) => { /* ... */ }); // no error: the callback accepts a C, but it might be called with a B which isn't a C
```
That's what "callbacks are bivariant" means.
Regarding `+`/`-`, do you mean to use it with the type specifier? Like:
```ts
function f<T>(x: T-[], y: T) { x.push(y); } // y: T+, by default for function arguments
```
> Directives are a part of ECMAScript specifically designed for this kind of scoped alteration of semantics.

Which directives do you mean?
The arrow functor is contravariant in its first argument and covariant in its second. Contravariance acts somewhat like multiplication by -1. In `(a: A) => B`, `A` is contravariant and `B` is covariant. In `(f: (a: A) => B) => C`, `C` is covariant and `(a: A) => B` is contravariant, which means that `A` is covariant (-1 * -1 = 1) and `B` is contravariant (-1 * 1 = -1).
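The "multiply by -1" rule can be seen in TypeScript itself under `--strictFunctionTypes` (all names below are illustrative, not from the thread): because `Dog` sits in a doubly contravariant position, a handler-taker over `Dog` is assignable to a handler-taker over `Animal` — the covariant direction.

```typescript
class Animal { name = "animal"; }
class Dog extends Animal { bark() { return "woof"; } }

type TakesDogHandler = (f: (a: Dog) => void) => void;
type TakesAnimalHandler = (f: (a: Animal) => void) => void;

// Dog appears doubly contravariant (i.e. covariant) in TakesDogHandler,
// so TakesDogHandler is assignable to TakesAnimalHandler:
const forEachDog: TakesDogHandler = f => [new Dog(), new Dog()].forEach(f);
const h: TakesAnimalHandler = forEachDog;

const names: string[] = [];
h(a => names.push(a.name)); // safe: every Dog handed to the handler is an Animal
console.log(names.length); // 2
```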
> Regarding +/-, do you mean to use it with type specifier?

I mean to use it in the type parameters rather than the function signature:
```ts
interface Foo<-A, +B> {
  map<+C>(f: (a: A) => C): Foo<C, B>;
}
```
> Which directives do you mean?

The directive `"use strict"` changes the semantics of the language within a function block or program production (usually an HTML script block). I imagine a `"use variance"` directive in a module production.
> The arrow functor is contravariant in its first argument and covariant in its second. Contravariance acts somewhat like multiplication by -1.

I understand this. But TS currently violates this rule in the case of callbacks. I didn't say how it should work; I said how it works now.
Yikes! That's terrible.
If function parameters weren't bivariant, `Array<Dog>` would not be a structural subtype of `Array<Animal>` due to members like `forEach`.
I take back my "that's terrible" assessment; that would only be fair for a purely functional language. Every mutable data structure is going to have this trouble. Getters are covariant while setters are contravariant: given `f: (x: X) => Y` and `arr: Array<X>`, `arr.map(f)` is an array where a getter returns a `Y`; for instance, this could be implemented lazily by having a getter look up an element `x` and then return `f(x)`. A map using `X` contravariantly would take a function `g: (w: W) => X`, and `arr.comap(g)` would be an array where the setter would store `g(w)` in the array. (I'm not suggesting comap be implemented or that map have a different implementation, just pointing out how the variance shows up in getters and setters.)
I think the article's right that the sound alternatives are too cumbersome for not enough benefit.
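The getter/setter framing above can be made concrete with a small sketch (`mapView` and `comapView` are invented names, not a proposed API): a lazily mapped read view uses `X` covariantly in its getter, while a "comapped" write view uses `X` contravariantly in its setter.

```typescript
// Read view: getting index i applies f to the underlying element (X used covariantly).
function mapView<X, Y>(arr: X[], f: (x: X) => Y) {
  return { get: (i: number): Y => f(arr[i]) };
}

// Write view: setting index i stores g(w) into the underlying array (X used contravariantly).
function comapView<W, X>(arr: X[], g: (w: W) => X) {
  return { set: (i: number, w: W): void => { arr[i] = g(w); } };
}

const nums = [1, 2, 3];

const strings = mapView(nums, n => String(n));
console.log(strings.get(1)); // "2"

const writer = comapView(nums, (s: string) => s.length);
writer.set(0, "hello");
console.log(nums[0]); // 5
```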
@RyanCavanaugh I thought `Array<Dog>` wouldn't be a structural subtype of `Array<Animal>` due to members like `push`. Methods like `forEach` are fully compatible. Indeed:
```ts
interface X<T> {
  out(): T;
  cb(cb: (v: T) => void): void; // like Array.forEach
  in(v: T): void;               // like Array.push
}

var xdog: X<Dog>;
var xanimal = <X<Animal>>xdog; // is it convertible?

// out is convertible, fair - covariance
xanimal.out(); // <Dog>xdog.out(), ok

// cb is convertible, fair - covariance
xanimal.cb(animal => cb(animal)); // xdog.cb(dog => cb(<Animal>dog)), ok

// in is not convertible - contravariance, but no error
xanimal.in(animal); // it is the same as...
xdog.in(animal);    // error -- animal is not a Dog, but the same call on xanimal doesn't produce an error

// here bivariance is used, but it's not fair
xanimal.cb((dog: Dog) => { /* ... */ }); // no error due to bivariance, but there should be
```
So my proposal is to mark variables/parameters with a variance modifier to make them convertible.
So if you convert `Dog[]` to `Animal[]` you should actually get `out Animal[]`, which means it's impossible to call the `push` method on it.
And an even clearer example of bivariance which shouldn't be allowed:
```ts
var f: (x: { x: number; }) => void;
var g: (x: { x: number; y: number; }) => void;

g = f; // fair
f = g; // no error, but there should be
```
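To see why `f = g` is unsound at runtime, here is a small sketch; the cast simulates the bivariant assignment so the snippet compiles even under today's strict settings:

```typescript
const g = (p: { x: number; y: number }): number => p.y * 2;

// Parameter bivariance accepted this assignment directly;
// the cast stands in for it here.
const f = g as unknown as (p: { x: number }) => number;

const result = f({ x: 1 }); // g reads p.y, which doesn't exist on this object
console.log(result); // NaN
```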
If I understand right, you're suggesting keywords that would construct the co- or contravariant supertype of a given generic type, somewhat like extracting the real and imaginary part of a complex number. I like that idea.
Recently I found a similar issue, I guess:
```ts
class Base { n() { return 0; } }
class Derived extends Base { m() { return 1; } }

var base = (b: Base) => b.n();
var derived = (d: Derived) => d.m();

base = derived;
base(new Base()); // TypeError: undefined is not a function (evaluating 'd.m()')
```
Wondering if it could be fixed easily or not?
Found an area where covariance (IIRC) would really help: strongly (and provably) typing the element tree. You could correctly narrow a vdom's children to only the correct types, so something like React could prevent, say, a `<title>` from being a child of a `<div>`, which even though HTML would accept and ignore, React shouldn't (and doesn't, AFAIK). Without covariant types, though, I've already discovered that if you use generics for attributes, TypeScript often requires an explicit parameter to prevent a type error where it wouldn't if the parameter were covariant. A concrete example would be something like this, which doesn't check without explicitly specifying types, but could be entirely inferred with covariant types:
```ts
type Child = // things...

interface Attrs {
  // things...
}

interface VNode<T extends string, A extends Attrs, C extends Child> {
  type: T;
  attrs: A;
  children: C[];
}

export function m<T extends string, C extends Child>(type: T, c?: C | C[]): VNode<T, {}, C>;
export function m<T extends string, A extends Attrs, C extends Child>(type: T, a: A, c?: C | C[]): VNode<T, A, C>;
```
In effect, with covariant types and what exists today (and enough patience - it'd take a long while), you could mostly prove the well-formedness of virtual DOM trees (minus quantity) and largely correctly type the DOM itself.
I discovered this working on a vdom experiment, where components can be in charge of rendering themselves (mod tree diffing). I optimally want to track both permitted attributes and permitted children, so covariance would help.
It's a rather simple tweak in the compiler code to turn off covariance for parameters. I tried it once to get a feeling of what it takes to be a happy owner of code that works right. The biggest pain is the overloaded DOM event handlers in lib.d.ts, which won't compile after such a tweak. I tend to think that using overloading was a poor design choice for event handlers, because effectively they are a bunch of separate functions (despite sharing the same name) rather than one function with a basic signature coupled with tons of covariant overloads: https://github.com/Microsoft/TypeScript/issues/6102#issuecomment-208119175
@aleksey-bykov Covariance and contravariance usually have to be explicitly specified in type systems with subtyping (Haskell/etc. use type constraints, not actual subtyping). It's not easily inferred in general.
In most cases, it should be easy to infer.
@Igorbek I meant in the case of generics.
@aleksey-bykov Sorry for not fully reading your comment before posting...
I do suspect that adding support for this might allow fixing #6102 later as well. It'd be a breaking change, but after proper generic covariance and contravariance support is added, that would be more likely to be fixable.
Really need this feature. There's not much point in using inheritance if function-parameter bivariance means that a base class can be passed to a method taking an inherited type.
This should not be possible.
```ts
class Animal {}
class Cat extends Animal {}

function run(cat: Cat) {}

run(new Animal());
```
@schotime it's possible not because of variance. It's possible because `Cat` and `Animal` are structurally compatible. If you define `Cat` with a private field, it wouldn't work:
```ts
class Cat extends Animal { private x; }
```
For the actual issue with variance and base/derived classes, see the comment above.
the example that is killing me:
```ts
interface A {
  x: string;
}

interface B {
  x: string;
  y: string;
}

function copyB(value: B): B {
  return undefined;
}

var values: A[] = [];

values.map(value => copyB(value)); // fails as expected
values.map(copyB); // <-- expected to fail, but it does not
```
@Igorbek Apologies. This is a better example.
```ts
class Animal {
}

class Cat extends Animal { private x; }

function run(input: (a: Animal) => void) {
  input(new Animal());
}

function cat(c: Cat) {
}

run(cat);
```
@schotime IIUC, those still structurally match.
```ts
type input = (a: Animal) => void;
type cat = (c: Cat) => void;
```
The problem is that TypeScript is structural, not nominal. There's currently no way to specify any nominal types other than enums (nominal subtypes of `number`).
So I believe this directly depends on #202. I may be wrong, but it's an educated guess.
You need to be clearer when you say they structurally match. A callback requiring a cat cannot be happy when fed a random animal, say, a whale. In this sense they are not a near match.
@aleksey-bykov I stand corrected. It may indeed not depend on #202, then.
Please see my proposal #10717, which addresses the issue. There's also a published version. I'm looking for feedback from the TypeScript team and the community.
Since no one has mentioned two of the most compelling examples -- arrays and promises -- I've added one for good measure:
```ts
var p: Promise<{}> = ...;
var p2: Promise<string>;
p2 = p;
p2.then(s => s.substr(2)); // runtime error: s is actually a {}, which has no method substr
```
Normally you can't assign `{}` to `string` without a cast, but due to function argument bivariance, you can assign a `Promise<{}>` to a `Promise<string>`, or assign a `{}[]` to a `string[]`.
I'd say that's the most compelling reason of all, given the popularity of promises.
@aaronla-ms note that even non-bivariant promise types can be cross-assigned at present. It's another promise issue that masks the bivariance issue.
```ts
var p: Promise<number> = ...;
var p2: Promise<string>;
p2 = p;
p2.then(s => s.substr(2)); // runtime error, no method 'substr' on number
```
@yortus ah yes, I should have clarified that I'm using a simplified promise definition that doesn't suffer from the bug you mentioned:
```ts
interface Promise<T> {
  then<U>(cb: (value: T) => U | Promise<U>, errcb?: (err: Error) => U | Promise<U>): Promise<U>;
}
```
Even then, you're able to unsoundly assign `Promise<{}>` to `Promise<string>`.
FYI this came up in the suggestion backlog slog as parameter bivariance is quite problematic for Promises specifically. We'll be thinking about this more - co/contravariance annotations are the only way out of that particular pickle and it may not be as scary as we think it is.
Awesome, thanks for considering this! I know it's from a couple years ago, but re your comment:

> If function parameters weren't bivariant, `Array<Dog>` would not be a structural subtype of `Array<Animal>` due to members like `forEach`.

`Array<T>.forEach` by itself doesn't seem like it would cause any trouble if `Array` were solely covariant instead of bivariant. E.g. in .NET you have `ForEach<T>` available as an extension method on the covariant `IEnumerable<T>`.
However it is true that when you arrive at index assignment you are forced to make `Array` itself invariant. You can work around this by having two interfaces, a covariant "iterable" interface and a contravariant "indexable collection" interface, like so:
```ts
// Mark this as covariant
interface Iterable<out T> {
  readonly [index: number]: T;
  length: number;
}

// Mark this as contravariant
interface NumericHash<in T> {
  [index: number]: T;
}

interface Array<T> extends Iterable<T>, NumericHash<T> { /* ... */ }
```
Now in the consumer's code:
```ts
interface Animal { eat(): void }
interface Dog extends Animal { woof(): void; }

let animals: Animal[];
let dogs: Dog[];
let llama: Animal;
let dog: Dog;

// These two are illegal (due to appropriate variance rules)
dogs[0] = llama;
animals.forEach((d: Dog) => d.woof());

// However, these two are fine
animals[0] = dog;
dogs.forEach((a: Animal) => a.eat());
```
@masaeedu the code
```ts
// Mark this as contravariant
interface NumericHash<in T> {
  [index: number]: T;
}
```
would not really work, since the indexer here is for both read and write. A contravariant type would require only the write-only side of the index signature, which is not supported at the moment.
In my proposal #10717 I'm pointing out a way to leave types like `Array` unchanged but leverage use-site variance annotations. These would make `Array<in T>` a supertype of `Array<T>` where any supertype of `T` can be safely passed (usually for write purposes); accordingly, `Array<out T>` would be a supertype of `Array<T>` where any subtype of `T` can be safely passed (usually for read purposes).
@Igorbek Ah, okay. That one is just nice to have, though; I don't think anyone really cares about using array indexing contravariantly. You could just bake it into the invariant `Array` type. E.g. in .NET you have the covariant `IEnumerable` interface implemented by lists, but they didn't bother making a separate contravariant interface for assignment.
Your approach is probably the most pragmatic, but I would still really like contra- and covariant type annotations to be owned by whoever defines the interface, rather than whoever consumes it. From my pattern of coding at least, having to constantly remember and specify the modifiers where I'm consuming the code would not help me catch bugs. YMMV.
Another completely unsafe example:
```ts
interface MyComponent<T> {
  items: T[]
  selectedItem: T
  onChange: Handler<T>
}

type Handler<T> = (val: T) => void

function onChange(item: string) {
  console.log(item.slice(2))
}

const comp: MyComponent<string | null> = {
  items: ['a', 'b', 'c'],
  selectedItem: null,
  onChange
}
```
Everything compiles fine but the onChange function can actually get passed a null reference, and that will throw at runtime.
With the tools already present in the language, we are nearly at the point where having function parameters default to contravariant would accomplish most usage scenarios:
```ts
declare function func(f: (x: string) => void): void;
func(() => { });          // Valid
func((x: 'foo') => { });  // Compiler error

// Want a covariant function? Generics will let you write one
declare function func2<S extends string>(f: (x: S) => void): void;
func2(() => { });         // Invalid
func2((x: 'foo') => { }); // Valid

// How about a function that can operate on an array of any type of Animal?
declare function sortByWeight<A extends Animal>(animals: Array<A>): void;
```
Edit: adjusted last examples.
Regarding the argument that co-/contravariance are hard-to-grasp concepts - I think this is irrelevant considering that interface assignment compatibility tracks co-/contravariance even without annotations, and therefore in order for a person to understand what's going on, they need to understand the co-/contravariance (though I think it's covariance vs bivariance currently) distinction anyway.
```ts
interface Setter<T> {
  set(value: T): void;
}

let sA1: Setter<{}> = { set() {} };
const sA2: Setter<number> = sA1;

let sB1: Setter<number> = { set(value: number) {} };
const sB2: Setter<{}> = sB1; // succeeds due to parameter bivariance

interface Getter<T> {
  get(): T;
}

let gA1: Getter<{}> = { get() { return {}; } };
const gA2: Getter<number> = gA1; // Type '{}' is not assignable to type 'number'.

let gB1: Getter<number> = { get() { return 1; } };
const gB2: Getter<{}> = gB1; // succeeds due to covariant returns
```
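The unsound `Setter` widening above has an observable runtime consequence. A sketch (note that because `set` is declared with method syntax, its parameter stays bivariant even under `--strictFunctionTypes`, so this still compiles):

```typescript
interface Setter<T> { set(value: T): void; }

const doubled: number[] = [];
const sNum: Setter<number> = { set(v) { doubled.push(v * 2); } };

// Accepted due to method-parameter bivariance:
const sAny: Setter<{}> = sNum;
sAny.set("oops"); // "oops" is a {}, so this type-checks...

console.log(doubled); // [NaN] — "oops" * 2 at runtime
```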
For a long time, I didn't understand that TypeScript tracks the generic variance, and therefore the distinction between Promise
EDIT: Or rather, I understand now that structural typing results in effects as if TypeScript tracked variance across interface boundaries.
With the recent addition of `--strictFunctionTypes`, co- and contravariance are now supported on function types (though not on methods). This was a major step up, and we had to fix dozens of actual issues in our code (which is awesome!).
Are there any plans to also support co- and contravariance for other types (as reported in this issue)? Or is nothing on the roadmap?
@nicojs the original proposal seemed to suggest adding co/contravariance annotations as a means of opting out of unsoundness on a per-type basis. With `--strictFunctionTypes`, the variance of a type will now be inferred correctly for other types... my covariant `interface Stream<T> { next(): T; }` behaves covariantly with respect to `T`, and my contravariant `interface Bin<T> { put(item: T): void; }` behaves contravariantly with respect to `T`.
Could you clarify: are you asking whether there are any remaining soundness holes, or asking for an explicit way to declare variance (in addition to the inferred behavior)?
The proposal initially discussed here and later outlined primarily in #10717 is mostly concerned with use-site variance annotations. The motivating use cases are still not addressed, even with the recent stricter rules.
The simplest example is the array type, which can be used both co- and contravariantly. With the proposal, an array of type `Array<out T>` can only be used where type `T` is in covariant positions (getters, methods like `pop`). Similarly, `Array<in T>` can only be used where `T` is in contravariant positions (setters, methods like `push`). Note that, in general, `Array<out T>` is not the same as `ReadonlyArray<T>`.
@Igorbek Could you elaborate on how `Array<out T>` would be different from `Readonly<Array<T>>` (not `ReadonlyArray<T>` - I'm not including methods here), or `Array<in T>` from a theoretical `Writeonly<Array<T>>` (assuming some `Writeonly` analogue to the built-in `type Readonly<T> = {+readonly [P in keyof T]: T[P]}`)?
@isiahmeadows consider `Array`'s methods like `pop`, `reverse`, `shift`, `sort`, and many others. Although they modify the array and are excluded from `ReadonlyArray<T>`, the type `T` there is in a covariant position and is therefore part of `Array<out T>`.
@Igorbek Re-read my comment. You missed my nuance between `Readonly<Array<T>>` and `ReadonlyArray<T>`.
ah, sorry @isiahmeadows, I thought you were asking in the context of my previous note about their differences.
In fact, `Readonly<Array<T>>` is even less covariant with respect to `T` than `ReadonlyArray<T>`. Assuming it is defined as `type Readonly<T> = { readonly [P in keyof T]: T[P]; }`, it only transforms own fields (not even methods) to become read-only.
Actually, TS does not currently make `Readonly<T[]>` actually read-only:
```ts
function test<T>(a: Readonly<T[]>, v: T) {
  a[0] = v; // no error
}
```
In general, generic type variance has nothing to do with read/write-ability. A simple test would be:
```ts
interface X<T> {
  value: T;                // read-write field: T in covariant position for reads, contravariant for writes
  set: (value: T) => void; // read-write field with T in contravariant position
}

// Readonly<X<T>> is effectively equivalent to this (pseudo-syntax):
type Readonly<X<T>> = {
  readonly value: T;                // this is now covariant with respect to T
  readonly set: (value: T) => void; // read-only field, but T is still in a contravariant position
}

// whereas X<out T> would be (writeonly is hypothetical):
type X<out T> = {
  readonly value: T;                 // same as Readonly
  writeonly set: (value: T) => void; // see the difference
}

// and X<in T>:
type X<in T> = {
  writeonly value: T;
  readonly set: (value: T) => void;
}
```
And a final thing: even if in many respects a readonly version is mostly what you want, a simple `Readonly<T>` cannot deal with multiple type arguments. Imagine `Processor<TIn, TOut>`, which is contravariant with respect to `TIn` and covariant with respect to `TOut`; it cannot be expressed with `Readonly`/`Writeonly`, as its type arguments have mixed variance.
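A sketch of such a mixed-variance type (names are illustrative; declaring `process` as a function-typed property makes the variance checked under `--strictFunctionTypes`):

```typescript
// Contravariant in TIn, covariant in TOut:
interface Processor<TIn, TOut> {
  process: (input: TIn) => TOut;
}

class Animal { legs = 4; }
class Dog extends Animal { bark() { return "woof"; } }

const makeDog: Processor<Animal, Dog> = { process: () => new Dog() };

// Narrowing TIn (Animal -> Dog) and widening TOut (Dog -> Animal) is safe,
// but no single Readonly/Writeonly wrapper captures both directions at once:
const p: Processor<Dog, Animal> = makeDog;
console.log(p.process(new Dog()).legs); // 4
```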
Okay, I'd find `a[i] = 0` succeeding when `a: Readonly<number[]>` to be a bug.
> Okay, I'd find `a[i] = 0` succeeding when `a: Readonly<number[]>` to be a bug.

Being fixed in #29435.
In the case of arrays, why not prevent aliasing assignments that aren't invariant without an explicit cast?
```ts
class Animal {}
class Dog extends Animal { bark() { } }
class Cat extends Animal { purr() { } }

const cats: Array<Cat> = [];
const goodAnimals: Array<Animal> = [new Cat, new Cat];
const badAnimals: Array<Animal> = cats; // why not make this an error?

goodAnimals.push(new Dog); // okay
badAnimals.push(new Dog);  // hard to prevent; how do you tell goodAnimals from badAnimals?
cats.forEach(cat => cat.purr()); // oops!
```
Given TypeScript's support for generics, when are covariant arrays actually useful?
(It's a question as well as a suggestion)
Update: a proposal #10717
I had supposed that for a structural type system like TypeScript's, type variance isn't applicable, since type compatibility is checked by use.
But when I read @RyanCavanaugh's TypeScript 1.4 sneak peek (specifically the 'Stricter Generics' section), I realized that there's some gap in this direction, by design or by implementation.
I wondered why this code compiles:
Clearer code:
How could `B[]` be assignable to `A[]` if at least one member, `push`, is not compatible? For `B[].push` it expects parameters of type `B`, but `A[].push` expects `A`, and it's valid to call it with an `A`. To illustrate:
Do I understand correctly that this is by design? I don't think it can be called type-safe.
Actually, such a restriction that would make `B[]` unassignable to `A[]` isn't desirable. To solve it, I suggest introducing variance at some level (variable/parameter, type?).

Syntax

I'm not sure where variance should be applied - to the variable or the type? It looks like it's closer to the variable itself, so the syntax could be something like:
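The code block with the proposed syntax was lost; judging from the surrounding text, a variable-level annotation would look something like this (hypothetical syntax, not valid TypeScript):

```ts
var bs: B[] = [];
var as: out A[] = bs; // covariant use: reads return A, but push(a: A) is rejected
```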
Questions for clarification:
- Can both be combined, `in out` (fixed type)?
- What if neither `in` nor `out` is specified (open for upcast/downcast)?

So this topic is a discussion point.
(open for upcast/downcast)?So this topic is a discussion point.