CarliJoy / intersection_examples

Python Typing Intersection examples
MIT License

An issue with the accepted definition of Union #22

Open mniip opened 1 year ago

mniip commented 1 year ago

The text in PEP 483 that describes how unions behave actually doesn't do a very good job as a definition of a union type.

  • Union[t1, t2, …]. Types that are subtype of at least one of t1 etc. are subtypes of this.
    • Unions whose components are all subtypes of t1 etc. are subtypes of this. Example: Union[int, str] is a subtype of Union[int, float, str].
    • The order of the arguments doesn’t matter. Example: Union[int, str] == Union[str, int].
    • If ti is itself a Union the result is flattened. Example: Union[int, Union[float, str]] == Union[int, float, str].
    • If ti and tj have a subtype relationship, the less specific type survives. Example: Union[Employee, Manager] == Union[Employee].
    • Union[t1] returns just t1. Union[] is illegal, so is Union[()]
    • Corollary: Union[..., object, ...] returns object.

This lists some facts that are indeed true if we were to interpret Union[A, B] as a static type that stands for a set of values that is a set-theoretic union of the sets that A and B stand for. However this does not actually define Union[A, B] to be that set.

Many have interpreted the first line: "Types that are subtype of at least one of t1 etc. are subtypes of this." as a definition:

C <: A | B if and only if C <: A or C <: B

However, this is not how set-theoretic unions work; it is not in general true that $C \subseteq A \cup B \iff (C \subseteq A) \lor (C \subseteq B)$.

Counterexample:

```py
>>> A = {1}
>>> B = {2}
>>> C = {1, 2}
>>> C.issubset(A | B) == (C.issubset(A) or C.issubset(B))
False
```

What people probably are thinking of is the formula $c \in A \cup B \iff (c \in A \lor c \in B)$, which is actually true. But we cannot use this as a definition because that involves checking potentially infinitely many $c$. Instead we want a definition in terms of the <: aka $\subseteq$ relation.

This is a solved problem and I'll cut to the chase: the formulas that define set-theoretic unions and intersections are as follows:

$$C \subseteq A \cap B \iff (C \subseteq A \land C \subseteq B)$$

$$C \supseteq A \cup B \iff (C \supseteq A \land C \supseteq B)$$

The duality between intersection and union lies in reversing the subset relation, not in replacing an "and" with an "or". In fact it is a well understood idea that transitive relations play extremely poorly with "or", and extremely well with "and". Hence both of the above definitions use "and".
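For what it's worth, here is a quick finite-set sanity check of both definitions (sets standing in for types, `<=`/`issubset` standing in for `<:`; the universe and the helper `subsets` are just illustrative):

```py
# Sanity check of the two universal-property definitions on finite sets.
from itertools import chain, combinations

UNIVERSE = {1, 2, 3}

def subsets(s):
    """All subsets of s."""
    s = list(s)
    return [set(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

A, B = {1, 2}, {2, 3}

# C <= A & B  iff  (C <= A and C <= B)
assert all((C <= (A & B)) == (C <= A and C <= B) for C in subsets(UNIVERSE))

# C >= A | B  iff  (C >= A and C >= B)
assert all((C >= (A | B)) == (C >= A and C >= B) for C in subsets(UNIVERSE))
```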

What about $A \cap B \subseteq ...$ and $... \subseteq A \cup B$? We have these formulas, as corollaries of the definitions above:

$$(A \subseteq D \lor B \subseteq D) \implies A \cap B \subseteq D$$

$$(D \subseteq A \lor D \subseteq B) \implies D \subseteq A \cup B$$

Proof:

  • Substitute $C = A \cap B$ into the first equation. The LHS is true by reflexivity, so the RHS must be true too. If we have $A \subseteq D$ then by transitivity $A \cap B \subseteq A \subseteq D$. Likewise if we have $B \subseteq D$. So in total, if we have either $A \subseteq D$ or $B \subseteq D$, then we have $A \cap B \subseteq D$.
  • Same proof but with all relations reversed. Substitute $C = A \cup B$ into the second equation; the RHS must be true. If $D \subseteq A$ then by transitivity $D \subseteq A \subseteq A \cup B$. Same for $B$.

Note that these are one-way implications. Sufficient but not necessary conditions. There's once again a symmetry with reversing the subset relation, but no symmetry between "and" and "or".
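The converse fails, e.g. for the intersection corollary; a finite counterexample in the same style as above (the particular sets are arbitrary):

```py
>>> A = {1, 2}
>>> B = {2, 3}
>>> D = {2}
>>> (A & B).issubset(D)             # A ∩ B ⊆ D holds ...
True
>>> A.issubset(D) or B.issubset(D)  # ... but neither A ⊆ D nor B ⊆ D does
False
```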

In #1, here, @erictraut lists all 4 of these on equal footing, labelled "non-controversial". I want to emphasize that the two involving "and" are actually definitions by universal property, while the two involving "or" are special cases/corollaries.

What does the PEP 483 paragraph actually define? Most of the bullet list is actually describing a reduction procedure for type expressions involving class literals and unions. The proposed algorithm of checking `C | D <: A | B` with `(C <: A or C <: B) and (D <: A or D <: B)` only works because each of `A`, `B`, `C`, `D` can only be a class, as the only other possible form (a union) gets flattened first.

However, in the presence of intersections this algorithm already fails: `A & (B | C) <: (A & B) | (A & C)` holds, yet in general neither `A & (B | C) <: A & B` nor `A & (B | C) <: A & C` does.

I'll also cut to the chase here: you want to reduce the left-hand side to a *disjunctive* normal form, and the right-hand side to a *conjunctive* normal form. The `|` on the left and the `&` on the right can then be taken apart. But the `&` on the left and the `|` on the right cannot. Problems of the kind `t1 & ... <: s1 | ...`, where `t1`, ..., `s1`, ... are class literals, are in general irreducible and will require knowledge of the inheritance tree and whether a multiple inheritance child class exists/can be constructed.
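To make the shape of that reduction concrete, here is a rough sketch (an illustration of the DNF/CNF idea over a toy type language, not anybody's actual implementation) that defers the irreducible atomic problems `t1 & ... <: s1 | ...` to an oracle that knows the inheritance tree:

```py
# Toy type language: class literals, unions, intersections.
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Cls:            # a class literal
    name: str

@dataclass(frozen=True)
class Union_:
    items: tuple      # of types

@dataclass(frozen=True)
class Inter:
    items: tuple      # of types

def dnf(t):
    """Rewrite t as a list of conjunctions, each a list of class literals."""
    if isinstance(t, Cls):
        return [[t]]
    if isinstance(t, Union_):
        return [conj for item in t.items for conj in dnf(item)]
    # intersection: distribute & over | by picking one conjunct per operand
    parts = [dnf(item) for item in t.items]
    return [[c for conj in choice for c in conj] for choice in product(*parts)]

def cnf(t):
    """Rewrite t as a list of disjunctions, each a list of class literals."""
    if isinstance(t, Cls):
        return [[t]]
    if isinstance(t, Inter):
        return [disj for item in t.items for disj in cnf(item)]
    # union: distribute | over & by picking one disjunct per operand
    parts = [cnf(item) for item in t.items]
    return [[d for disj in choice for d in disj] for choice in product(*parts)]

def is_subtype(lhs, rhs, atomic):
    """lhs <: rhs iff every DNF conjunct of lhs is <: every CNF disjunct of rhs.
    `atomic(conjuncts, disjuncts)` decides t1 & ... <: s1 | ... for class
    literals; that is the part that genuinely needs the class hierarchy."""
    return all(atomic(conj, disj) for conj in dnf(lhs) for disj in cnf(rhs))

A, B, C = Cls("A"), Cls("B"), Cls("C")
lhs = Inter((A, Union_((B, C))))              # A & (B | C)
rhs = Union_((Inter((A, B)), Inter((A, C))))  # (A & B) | (A & C)
assert dnf(lhs) == [[A, B], [A, C]]           # the | on the left comes apart
```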
randolf-scholz commented 1 year ago

@mniip

What people probably are thinking of is the formula $c \in A \cup B \iff (c \in A \lor c \in B)$, which is actually true. But we cannot use this as a definition because that involves checking potentially infinitely many $c$.

Why is it unsafe to assume there are only finitely many c? Seems like a fine assumption to make to me. Mathematically, it may be simpler to prove certain things with the infinity assumption, but in principle one should be able to restrict the theory to finite cases as we want to reason about real-world computer code, where only ever finitely many types and runtime values can exist.

However, in the presence of intersections this algorithm already fails: A & (B | C) <: (A & B) | (A & C) holds, yet in general neither A & (B | C) <: A & B nor A & (B | C) <: A & C does. [...] Problems of the kind t1 & ... <: s1 | ... where t1, ..., s1, ... are class literals are in general irreducible and will require knowledge of the inheritance tree and whether a multiple inheritance child class exists/can be constructed.

Can you please expand a bit why it is necessary to know whether a multiple inheritance child class exists/can be constructed? Why can't we just treat A&B like an abstract structural type (just like we do with A|B)? Whether *actual* types exist that are subtypes of A&B should be irrelevant when testing subtype relationships.

NeilGirdhar commented 1 year ago

Why is it unsafe to assume there are only finitely many c?

I think you misunderstood that part. He's just saying that the type of c may not be some single class, but could be a swath of an infinite number of possible types, too numerous to check. Therefore the definitions of union and intersection should be with respect to sets of types rather than individual types.

I do think we should lift this entire theory section into the PEP though. What do you think?

Can you please expand a bit why it is necessary to know "whether a multiple inheritance child class exists/can be constructed"?

I wondered the same thing. Anyway, this should be discussed in #5

NeilGirdhar commented 1 year ago

@mniip Do you have time to replace the section on "Theory" in specifications.rst with something along the lines of what you wrote above?

mniip commented 1 year ago

@NeilGirdhar

I think you misunderstood that part. He's just saying that the type of c may not be some single class, but could be a swath of an infinite number of possible types, too numerous to check. Therefore the definitions of union and intersection should be with respect to sets of types rather than individual types.

I was talking about how a static type is a set of values. $c$ is supposed to range over values here. We do have static types that are narrower than a runtime class (e.g. Literal or list[int]).

@randolf-scholz

in principle one should be able to restrict the theory to finite cases as we want to reason about real-world computer code, where only ever finitely many types and runtime values can exist.

Let's simplify to int. Are you arguing that because we store the number of digits in a machine word, we can only represent finitely many integers in int, up to 2^(2^64) or so? I think it helps to have at least a pretense of Turing completeness here by abstracting from specific hardware and a specific implementation of Python, such that this bound could be made arbitrarily high.

Even without this, it's simply impractical to test this many values.

Can you please expand a bit why it is necessary to know whether a multiple inheritance child class exists/can be constructed?

There are a couple of fundamental type operations that become nontrivial in presence of intersection types. Given some function type F, we are interested in the following:

The computation of the result type requires external knowledge about which conjunctions of types are satisfiable (i.e. are a non-empty type). The more precisely we can answer that question, the narrower our result types become. For example, given F = (A -> B) & (C -> D) we have dom F = A | C. But the computation of F ° A depends on the relationship between A and C:

randolf-scholz commented 1 year ago

@mniip I recognize that this is a wall of text, so bear with me.

Are you arguing that because we store the number of digits in a machine word, we can only represent finitely many integers in int, up to 2^(2^64) or so?

I apologize for the unclear formulation earlier. The way it appears to me is the following:

  1. We have a set S = {"abcd", 123, [object()], 42.123, …} of all possible runtime values. We can then say the type list corresponds to the subset $S_\text{list} ⊆ S$ of all possible list objects (or instances of subtypes thereof) that could be created at runtime.
    • Then, for a given runtime value X we have type[X] <: list ⟺ $X∈S_\text{list}$
    • Note that this presumes that we know how to determine whether something is a list object, otherwise we can't construct $S_\text{list}$, so it's not really a definition but rather a reformulation of the subtype problem as a subset-problem.
    • Of course, it doesn't make sense to test whether an object x is a 64-bit integer by constructing the set of all 64-bit integers and then looking if x is in this set. To construct this set in the first place, you'd have to have some exogenous criterion of what a 64-bit integer is, and this same criterion can be used to determine whether x is a 64-bit integer.
    • Of course, the reformulation may still be useful for reasoning about abstract constructs like A | B and A & B
  2. We have a set T = {object, set, int, float, Hashable, …} of all possible types that can be created. This set can be equipped with a partial order according to the subtype relationship, essentially giving rise to an ordered graph $G_T$ with object at the top and Never at the bottom.
    • When we do static analysis, we can assume that T starts with a small, finite set of built-in types and then gets larger as more types are constructed. Note that static analysis doesn't allow us to reason about dynamically created types. What I am saying is that we can assume the set of types stays finite, because otherwise the program would never halt.
    • For a given type $t∈T$ we can construct the set of all subtypes of $t$, call it $T_{<:t} ≔ \{t'∈T∣t'<:t\}$, giving rise to the induced set $S_T ≔ \{ T_{<:t} ∣ t∈T \} = \{ \{t'∈T∣t'<:t\} ∣ t∈T \}$. Intersections and unions can be described in this context using the element-based formula: $x <: a|b ⟺ x∈T_{<:a|b} ⟺ x∈T_{<:a} ∨ x∈T_{<:b}$.
    • Just like ①, this is a tautological reformulation of the problem
    • Of course, the reformulation may still be useful for reasoning about abstract constructs like A | B and A & B
  3. So given that ① and ② are tautological reformulations of A <: B, what is the definition of A <: B? Python seems to allow multiple things
    1. Nominal subtyping (≤ɴᴏᴍ). B ≤ɴᴏᴍ A if and only if B directly or indirectly inherits from A. The code class B(A) creates a type that is always a nominal subtype of A. A nominal subtype should also be a structural subtype (X ≤ɴᴏᴍ Y ⟹ X ≤ꜱᴛʀ Y), but errors arising from violations of this are restricted to the class-body of the subclass. After the subclass is defined, it is always considered a nominal subtype of its parents/ancestors.
    2. Gradual/Consistent subtyping (≤ɢʀᴅ). There is a special type Any that lives outside the regular subtype graph and satisfies special rules. Contrary to the other two, gradual subtyping is not transitive: X ≤ɢʀᴅ Any and Any ≤ɢʀᴅ Y hold for all X and Y, but X ≤ɢʀᴅ Y need not.
    3. Structural subtyping (≤ꜱᴛʀ). B is a structural subtype of A if the interface of B is compatible with the interface of A, meaning that B ≤ꜱᴛʀ A if and only if for every attribute/method foo of A it holds that B.foo <: A.foo (i.e. B.foo ≤ɴᴏᴍ A.foo or B.foo ≤ꜱᴛʀ A.foo or B.foo ≤ɢʀᴅ A.foo), or equality for mutable attributes if one wants to be strict.

In summary: B <: A ⟺ B ≤ɴᴏᴍ A or B ≤ɢʀᴅ A or B ≤ꜱᴛʀ A.
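As a concrete illustration of the nominal/structural distinction (the class and protocol names here are made up for the example):

```py
from typing import Protocol

class SupportsFoo(Protocol):    # a structural type
    def foo(self) -> int: ...

class A:
    def foo(self) -> int: return 0

class B(A):                     # nominal subtype of A
    pass

class C:                        # unrelated to A by inheritance
    def foo(self) -> int: return 1

def use_structural(x: SupportsFoo) -> int:
    return x.foo()

def use_nominal(x: A) -> int:
    return x.foo()

use_structural(C())  # accepted: C ≤ꜱᴛʀ SupportsFoo, no inheritance needed
use_nominal(B())     # accepted: B ≤ɴᴏᴍ A
use_nominal(C())     # rejected: C is not a (nominal) subtype of A
```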

Therefore, what we need to do in the PEP is specify:

  1. Define when C ≤ɴᴏᴍ (A&B) and when (A&B) ≤ɴᴏᴍ C. My suggestion: C ≤ɴᴏᴍ (A&B) if C inherits from both A and B, and (A&B) ≤ɴᴏᴍ C never, as we can treat A&B as a structural type.
  2. Define when C ≤ɢʀᴅ (A&B) and when (A&B) ≤ɢʀᴅ C. Suggestion: treat Any & X as irreducible and use the structural definition.
  3. Define when C ≤ꜱᴛʀ (A&B) and when (A&B) ≤ꜱᴛʀ C. Suggestion: C ≤ꜱᴛʀ (A&B) if and only if (C ≤ꜱᴛʀ A) and (C ≤ꜱᴛʀ B) and (A&B) ≤ꜱᴛʀ C if and only if (A ≤ꜱᴛʀ C) or (B ≤ꜱᴛʀ C).

Safe applicative domain dom F computes the widest set of arguments (i.e. a type) with which a function can be called.

While I agree that this would be very nice information to have in some cases, even right now dom F is only implicitly known to type-checkers due to overloads. Safe application is tested on a per-case basis, i.e. when F(<args>) appears in code, the type-checker will simply traverse the list of overloads and see if any match <args>, and generally pick the first match to determine the return type.

So if F = [(T1 -> R1), (T2 -> R2), (T3 -> R3), ...], where [f1, f2, f3, ...] is the ordered list of overloads, the type checker doesn't need to know that dom F = T1 | T2 | T3 | ..., because it can determine if X <: dom F by simply sequentially testing whether X <: T1, X <: T2, X <: T3, etc. Then X <: dom F if and only if a match is found.
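For instance, a minimal sketch of this sequential matching with `typing.overload`, reusing the `F = (A -> B) & (C -> D)` example (the class definitions are placeholders):

```py
from typing import overload, reveal_type

class A: ...
class B: ...
class C: ...
class D: ...

@overload
def f(x: A) -> B: ...
@overload
def f(x: C) -> D: ...
def f(x: A | C) -> B | D:
    return B() if isinstance(x, A) else D()

reveal_type(f(A()))  # matches the first overload: B
reveal_type(f(C()))  # matches the second overload: D
f(object())          # no overload matches, i.e. object </: dom f: error
```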

The computation of the result type requires external knowledge about which conjunctions of types are satisfiable (i.e. are a non-empty type)

Again, I believe by far the easiest solution is to treat A&B as a purely structural type. In the example F = (A -> B) & (C -> D), the computation of F ° X would simply be:

  1. If X <: A and X <: C, then B&D
  2. If X <: A and ¬(X <: C), then B
  3. If ¬(X <: A) and X <: C, then D
  4. If ¬(X <: A) and ¬(X <: C), then error.

Mathematically, this means the intersection simply is the meet operation from partially ordered sets. In general, in partially ordered sets, the meet may not exist. However, in our case we can guarantee the meet exists, since Never is a universal lower bound, and we can assume the set of types involved in any halting program is always finite.
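A toy rendering of that case analysis (a sketch over a made-up representation of arrow and intersection types, not a proposal for checker internals):

```py
from dataclasses import dataclass

@dataclass(frozen=True)
class Arrow:
    param: type
    result: type

def apply_type(arrows: list[Arrow], X: type, is_subtype) -> list[type]:
    """Result components of (p1 -> r1) & (p2 -> r2) & ... applied to X.
    Several entries mean the result is their intersection (B & D in case 1);
    an empty match set is the error case (4)."""
    results = [a.result for a in arrows if is_subtype(X, a.param)]
    if not results:
        raise TypeError(f"{X!r} is not assignable to any parameter type")
    return results

# Example with F = (A -> B) & (C -> D), using issubclass as a stand-in oracle:
class A: ...
class B: ...
class C: ...
class D: ...
class AC(A, C): ...   # a multiple-inheritance subtype of both A and C

F = [Arrow(A, B), Arrow(C, D)]
print(apply_type(F, AC, issubclass))  # [B, D], read as B & D (case 1)
print(apply_type(F, A, issubclass))   # [B]                   (case 2)
```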

The only major disagreement at this point in time seems to be whether A&B must be identified with Never if there is any structural incompatibility between A and B. I disagree with this since class C(A, B) would create a nominal subtype of both A and B, hence C ≤ɴᴏᴍ (A&B). At this point, to be logically consistent, one would have to insist that C is also Never, but C is a class with some interface and clearly not Never.

Just like with regular nominal subtyping, if A is incompatible with B this will manifest itself inside the class body of C. In conclusion, if A is incompatible with B then logically we have:

  1. X ≤ꜱᴛʀ (A&B) ⟹ X = Never.
  2. X ≤ɢʀᴅ (A&B) with X ≠ Never is maybe still possible, I am not 100% set on this.
  3. X ≤ɴᴏᴍ (A&B) with X ≠ Never is still possible.

Rules (1) and (3) are inconsistent, given the requirement X ≤ɴᴏᴍ Y ⟹ X ≤ꜱᴛʀ Y from before. However, the scope of this error is restricted to the class body of the nominal subtype, just like it is in regular nominal subtyping if you write a class B(A) with an interface incompatible with A; there is nothing special about the intersection in this regard.
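For concreteness, here is a minimal sketch of the multiple-inheritance case (the classes are hypothetical; the `type: ignore[override]` marks the one place where a checker such as mypy reports the structural conflict, inside the body of C):

```py
class A:
    def foo(self) -> int: return 0

class B:
    def foo(self) -> str: return "s"

class C(A, B):
    # The structural conflict with B.foo is reported here, in C's body:
    def foo(self) -> int: return 1  # type: ignore[override]

def takes_a(x: A) -> int: return x.foo()
def takes_b(x: B) -> str: return x.foo()

c = C()
takes_a(c)  # accepted: C ≤ɴᴏᴍ A
takes_b(c)  # accepted: C ≤ɴᴏᴍ B, even though c.foo() returns an int at runtime
```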

randolf-scholz commented 1 year ago

The following example passes both mypy and pyright and shows that the structural incompatibility only raises errors inside the class body. Afterwards, class B(A) is considered a nominal subtype of A and can be used anywhere A is used.

```py
class A:
    def foo(self) -> int: return 0

class B(A):
    def foo(self) -> str: return "cheat"  # type: ignore[override]

def show(obj: A) -> int:
    return obj.foo()

x: A = B()  # no error
reveal_type(A().foo())  # int
reveal_type(B().foo())  # str
reveal_type(show(B()))  # int, no error since B ≤ₙₒₘ A ⟹ B<:A
```

https://mypy-play.net/?mypy=latest&python=3.11&gist=89aa3c3f034d0fb9e0be4e0f8b6aa4db

mikeshardmind commented 1 year ago

Any example with a type ignore should not be used to make an argument if the type ignore is used to break the rules. From false assumptions, you can construct any desired result.

randolf-scholz commented 1 year ago

@mikeshardmind That's entirely missing the point. The question is: when a rule is broken, at which point is/should the error raised? Of course, any example demonstrating a broken rule must include a broken rule.

Python has multiple kinds of subtyping relationships: nominal (≤ɴᴏᴍ), structural (≤ꜱᴛʀ) and gradual (≤ɢʀᴅ). When a type-checker tests for subtyping relationship, it tries to validate that

  1. A <: B ⟺ A ≤ɴᴏᴍ B or A ≤ꜱᴛʀ B or A ≤ɢʀᴅ B
  2. If A ≤ɴᴏᴍ B then A ≤ꜱᴛʀ B

I hope we can agree on the following:

The only thing we disagree on is that you keep insisting that the PEP must mandate that type-checkers must error on using A&B as a type hint. But honestly, this seems out of scope; in order to specify intersections, it should be sufficient to rigorously define when C <: A&B and when A&B <: C. I don't buy this divergence problem until we have some really convincing example of how divergence could occur, given that C <: A&B and A&B <: C are rigorously defined.

On the other hand, your proposal of erroring on A&B would lead to some very real divergences, like isinstance(x, A&B) vs isinstance(x, A) and isinstance(x, B) not being equivalent to a type-checker.

mikeshardmind commented 1 year ago

That's entirely missing the point. The question is: when a rule is broken, at which point is/should the error raised? Of course, any example demonstrating a broken rule must include a broken rule.

It actually isn't. We can't design for what happens if you suppress errors, that's the realm of type checkers.

A & B should be Never if A and B are incompatible. Period. Anything else is incorrect. The idea that you would intentionally specify an impossible intersection rather than the type you constructed and then ignored above is outside of what we can design for because you've started by violating one of the rules.

randolf-scholz commented 1 year ago

It actually isn't. We can't design for what happens if you suppress errors, that's the realm of type checkers. A & B should be Never if A and B are incompatible. Period. Anything else is incorrect.

Missing the point again. If someone defines a class C(A,B), then C is a nominal subtype of both A and B. Period.

What you are implicitly stating is that structural subtyping must take priority over nominal subtyping. However, this is contrary to how type-checkers like mypy or pyright work. Nominal subtyping takes priority over structural subtyping.

Just because someone could not define a structural subtype of A&B, does not imply that no one could define a nominal subtype or a gradual subtype. The nominal subtype would violate the rule "If A ≤ɴᴏᴍ B then A ≤ꜱᴛʀ B", but this is not the same as the nominal subtype not existing.

mikeshardmind commented 1 year ago

Missing the point again. If someone defines a class C(A,B), then C is a nominal subtype of both A and B. Period.

I'm not, though: the class C(A, B) is already breaking the rules by existing, and your type ignore in your example is hiding that. We cannot define a set of specifications of correctness under the assumption that people are breaking other rules and suppressing the warnings for them, or we will be stuck in an endless set of more and more ridiculous exceptions.

randolf-scholz commented 1 year ago

I'm not, though: the class C(A, B) is already breaking the rules by existing

I completely agree, and that's why an error needs to be raised inside the class body of C.

We cannot define a set of specifications of correctness under the assumption that people are breaking other rules and suppressing the warnings for them, or we will be stuck in an endless set of more and more ridiculous exceptions.

Rule breaks will inevitably happen; Python is too dynamic to be completely statically typed at the moment. The point we are discussing is: if a rule break happens, at which point should it be reported?

We both agree that a nominal subclass C(A,B) that is not a structural subtype of both A and B is a rule break, and an error needs to be raised.

We do not agree that A&B needs to necessarily raise, because the counter-position is that A&B is a purely abstract type that is completely and uniquely specified by its subtyping rules, just like Union is. It is the greatest lower bound (= meet). If someone writes a function foo(x: A&B), there are no subtyping rules broken here. Subtyping only gets violated once someone tries to feed in an x which is not a subtype of A&B, so this is the point at which an error should be raised.

The counter-position is that it doesn't matter whether A&B is only satisfiable by Never or not at function definition time, because A&B is viewed as the purely abstract greatest lower bound.

mikeshardmind commented 1 year ago

We do not agree that A&B needs to necessarily raise

I said that A & B for incompatible definitions of A and B must reduce to Never. This is different from the expression itself raising.

The counter-position is that it doesn't matter whether A&B is only satisfiable by Never or not at function definition time, because A&B is viewed as the purely abstract greatest lower bound.

And that bound is uninhabited; see the very first post in this thread for what that means. typing.assert_never shows that it actually isn't just an abstract definition, but something used both by the caller and by the function itself. Other existing typing features need to be able to detect that a type is uninhabited; this means A & B where A and B are incompatible must reduce to Never.
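For reference, this is the kind of use typing.assert_never already gets today, i.e. ordinary exhaustiveness checking (the union and the function are just an illustration):

```py
from typing import assert_never

def handle(value: int | str) -> str:
    if isinstance(value, int):
        return f"int: {value}"
    elif isinstance(value, str):
        return f"str: {value}"
    else:
        # value is narrowed to Never here; if a new member were added to the
        # union above, a checker would flag this call because the argument
        # would no longer be Never.
        assert_never(value)
```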

The computation of the result type requires external knowledge about which conjunctions of types are satisfiable (i.e. are a non-empty type). The more precisely we can answer that question, the narrower our result types become. For example, given F = (A -> B) & (C -> D) we have dom F = A | C. But the computation of F ° A depends on the relationship between A and C:

randolf-scholz commented 1 year ago

this means A & B where A and B are incompatible must reduce to Never.

From a structural point of view, yes, but there is no necessity to do this reduction, since anyone who passes in anything other than Never must have made a structural error earlier on. Again, the whole point here is deciding the right place and time to detect structural errors.

Secondly, I am not sure we agree on what it means for A & B to be structurally incompatible, so maybe we should clarify that. My thoughts can be summarized with the following table, which indicates the type of (A&B).foo if the condition is satisfied; otherwise A and B are structurally incompatible.

| A.foo compatible with B.foo? | A contravariant in foo | A invariant in foo | A covariant in foo |
|---|---|---|---|
| **B contravariant in foo** | ≥ (A.foo \| B.foo) | == A.foo (if A.foo ≥ B.foo) | == T (if B.foo ≤ T ≤ A.foo) |
| **B invariant in foo** | == B.foo (if B.foo ≥ A.foo) | == A.foo (if A.foo == B.foo) | == B.foo (if B.foo ≤ A.foo) |
| **B covariant in foo** | == T (if A.foo ≤ T ≤ B.foo) | == A.foo (if A.foo ≤ B.foo) | ≤ (A.foo & B.foo) |

Notes:

mikeshardmind commented 1 year ago

@randolf-scholz I was quite clear with my meaning, and you seemed well aware of it in the other thread where you tried to use a type ignore to circumvent it. If it is impossible for a type to exist that satisfies all components of an intersection, the intersection must resolve to Never. This is a discussion better had in the other thread.

CarliJoy commented 1 year ago

Again, the whole point here is when is the right place and time to detect structural errors.

This is nothing that this PEP needs to or should define; it is up to the type checker maintainers to decide. We should focus on defining a proper set of rules, that's it.

We won't be able to define when and how to detect problems for type checkers. The only thing we could do is write something like:

We would like to encourage type checkers to show end users MSG A in case of B.

But nothing more.