Closed: mikeshardmind closed this issue 1 year ago.
I don't consider the current behavior a "bug" in the sense that it is working as designed. I've changed this to an "enhancement request", and I'll consider it in that light.
A few thoughts...
In "basic" type checking mode (i.e. with default settings), pyright doesn't report any type inconsistencies for subclass overrides. These checks (reportIncompatibleMethodOverride
and reportIncompatibleVariableOverride
) can be enabled manually. They are also enabled when using the "strict" type checking mode.
Relatively few pyright users enable strict type checking mode. I don't have exact numbers because pyright is open source and doesn't collect telemetry, but we can estimate the numbers from pylance (which builds upon pyright and does include telemetry). Based on pylance telemetry data, we can assume that somewhere around 5% of pyright users enable strict mode. The other 95% get sufficient value out of basic-mode checks and find the strict-mode checks too noisy.
I agree that from a type theory standpoint, enforcing override invariance for mutable class-scoped variables is correct. From a practical standpoint, I'm not sure. It's not uncommon for Python code to override mutable class-scoped variables with a narrower (covariant) type. While this is theoretically unsafe, in practice most code that does this is written in a way where it results in no problems. When considering enhancement requests, I need to weigh benefits versus drawbacks. The benefit that I see here is that the small number of developers (yourself among them) who want the absolute strictest of type enforcement will have strong guarantees that their code has no bugs related to this theoretical typing hole. The drawback is that this will potentially produce a significant number of false positives in existing code that uses pyright's strict-mode type checking.
TypeScript doesn't enforce invariance for mutable class members. If it were to enforce this, my TypeScript code would be filled with errors. In my many years of writing and debugging production TypeScript and Python code, I don't recall ever having encountered an actual bug that resulted from this typing hole. That means the TypeScript team's choice not to enforce this was arguably a reasonable one. The same might be said of mypy.
To quote the lead developer on the TypeScript team: "100% [type] soundness is not a design goal [of TypeScript]. Soundness, usability, and complexity form a trade-off triangle. There's no such thing as a sound, simple, useful type system." I strongly agree with this sentiment. I know that you value soundness over simplicity and usefulness, but most users of pyright (and the other Python type checkers) prefer that we strike a good balance between these three.
I've prototyped your proposed enhancement and used mypy_primer to run pyright on roughly 25 typed code bases. Here are the results: https://github.com/microsoft/pyright/pull/5934. I also ran it on some private code bases and got similar results. This isn't as noisy as I feared it would be, but it's still noisy enough that this change gives me pause. I'll need to give this more thought before deciding what to do.
I'm interested in your thoughts about the above points.
I'm also interested to hear how maintainers of other Python type checkers respond to the corresponding issues in their issue trackers. For example, if mypy were to make this change, I would be more inclined to do so in pyright. As you've pointed out in other threads, there's value to the Python community if static analysis tools work consistently.
I agree that from a type theory standpoint, enforcing override invariance for mutable class-scoped variables is correct. From a practical standpoint, I'm not sure. It's not uncommon for Python code to override mutable class-scoped variables with a narrower (covariant) type. While this is theoretically unsafe, in practice most code that does this is written in a way where it results in no problems. When considering enhancement requests, I need to weigh benefits versus drawbacks. The benefit that I see here is that the small number of developers (yourself among them) who want the absolute strictest of type enforcement will have strong guarantees that their code has no bugs related to this theoretical typing hole. The drawback is that this will potentially produce a significant number of false positives in existing code that uses pyright's strict-mode type checking.
Starting with this: intersections (future) are significantly simpler to correctly specify if this hole is fixed. Subclassing, and what is "correct" there, is relatively fundamental to the type system. The impacts of this being correct are further reaching than if someone wanted something seen as more practical but limited in scope, such as tagged unions. (edit: actually, fixing this also gives us implicitly safe tagged unions, with no extra syntax needed)
As for minimal impact: the correct way to handle this is a generic, not subclassing with an override, so there's already a correct way to do it that shouldn't be unfamiliar to typing users.
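A minimal sketch of that alternative (the `Box` container is invented for illustration): parameterize with a type variable instead of subclassing and narrowing.

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Box(Generic[T]):
    """Holds an item of a caller-chosen type."""
    def __init__(self, item: T) -> None:
        self.item = item

    def get(self) -> T:
        return self.item

# Box[int] and Box[str] instead of a Box subclass that narrows `item`
int_box: "Box[int]" = Box(1)
str_box: "Box[str]" = Box("a")
```

Because `Box[int]` is not a subtype of `Box[str]` (or vice versa), no `Base`-typed reference can smuggle in a value of the wrong type the way a narrowing override allows.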
In "basic" type checking mode (i.e. with default settings), pyright doesn't report any type inconsistencies for subclass overrides. These checks (reportIncompatibleMethodOverride and reportIncompatibleVariableOverride) can be enabled manually. They are also enabled when using the "strict" type checking mode.
I actually saw this when I saw the PR and went to look at what settings it was enabled under by default. This surprised me. I think this should be on in basic; again, subclassing is very fundamental to Python's type system, and improperly subclassing breaks other things, like the expected behavior of functions accepting a specific type. It could also break library code that gives users a mixin and then allows them to call functions that expect a class with that mixin.
Relatively few pyright users enable strict type checking mode. I don't have exact numbers because pyright is open source and doesn't collect telemetry, but we can estimate the numbers from pylance (which builds upon pyright and does include telemetry). Based on pylance telemetry data, we can assume that somewhere around 5% of pyright users enable strict mode. The other 95% get sufficient value out of basic-mode checks and find the strict-mode checks too noisy.
Strict can be noisy in some code bases. It's my hope that as the type system gets more capable, people find they have what they need to express the types they actually have, rather than feeling like they are fighting the type system. But the type system becoming more capable is really predicated on it being built up, and to me that seems predicated on having a solid foundation, which I would consider this a part of.
TypeScript doesn't enforce invariance for mutable class members. If it were to enforce this, my TypeScript code would be filled with errors. In my many years of writing and debugging production TypeScript and Python code, I don't recall ever having encountered an actual bug that resulted from this typing hole. That means the TypeScript team's choice not to enforce this was arguably a reasonable one. The same might be said of mypy.
Python is only even in use at my work because of static analysis and strict type checking, and it's currently being reconsidered due to various recent discoveries of holes in the type system (which existed before but were overlooked or not discovered during initial decision-making). I've been making an attempt to get in front of this and make the claim that we can either get the holes that are viewed as too egregious fixed, or at least checked for in our own way locally in a manner that covers the requirements, avoiding a language rewrite for the projects which have rules attached to them relevant here. What some people view as merely enabling tooling, others are using to meet red-tape requirements, and you're unlikely to hear much in the way of specifics from those people. (I've said more than I'm comfortable with already, despite taking the time to be sure I have not said too much.) TypeScript allows many things that rule its use out entirely.
This isn't as noisy as I feared it would be
I should hope not. I don't think most people override a base class in a way that creates an incompatibility without it being either a mistake or a workaround for some other shortcoming in the type system. It was surprising to me that this was even allowed, and not behavior specific to a poor definition involving Never, when the original discussion that sparked this happened, because I'd never use inheritance this way to begin with. This is an obvious case for generics, or for functional rather than object-oriented code, depending on the problem space.
The noisiest thing I saw in there was with the steam library, which was arguably only designed this way because of a lack of tagged unions (though @gobot1234 might have more to say here). This seems to indicate that fixing this will help better identify shortcomings in the type system, and that we should provide more ergonomic solutions for developers that are founded on a solid, correct base for cases like this.
For example, if mypy were to make this change (https://github.com/python/mypy/issues/3208#issuecomment-1716378052), I would be more inclined to do so in pyright. As you've pointed out in other threads, there's value to the Python community if static analysis tools work consistently.
I view this as an inappropriate inversion of my argument in the other threads. This argues to do what is incorrect on the basis that everyone is doing it. If the argument is to keep the current behavior, it should be specified as an intentional exception, not kept merely because the first type checker had a bug (and they have classified it as a bug) and all the other implementations matched it.
My argument is that consistency should come from specification to specifically have a source of truth not subject to implementation error, and that implementations would be naturally consistent by adherence to such.
I agree that from a type theory standpoint, enforcing override invariance for mutable class-scoped variables is correct. From a practical standpoint, I'm not sure. It's not uncommon for Python code to override mutable class-scoped variables with a narrower (covariant) type. While this is theoretically unsafe, in practice most code that does this is written in a way where it results in no problems.
And there's an obvious case where type checkers already do the right thing here: the concrete list vs. the protocol Sequence. I would argue this to be significantly more representative of average non-type-checked code use than defining classes and then, rather than making the class generic, subclassing to change an attribute's type. If most code is written in a way where changing this has no negative impact, we should look at why the code that it does impact is written this way (see above, where the steam wrapper appears to be doing this in lieu of having tagged unions).
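The list vs. Sequence point can be sketched as follows (function names invented for illustration): read-only protocols may be covariant safely, while mutable concrete types must stay invariant.

```python
from typing import Sequence

def total(xs: Sequence[float]) -> float:
    # Sequence exposes no mutating methods, so accepting a
    # list[int] where Sequence[float] is expected is sound
    return sum(xs)

ints: list[int] = [1, 2, 3]
print(total(ints))  # 6

def pad(xs: list[float]) -> None:
    # Mutation is why list must be invariant
    xs.append(0.5)

# pad(ints) would be rejected by a type checker: if it were allowed,
# a float would end up inside a list[int]
```

The same reasoning is what makes invariance the sound rule for mutable attribute overrides.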
I don't have an opinion on whether this should be checked unconditionally but at least having the option would be nice (same goes for other typing holes).
I'd be surprised to have this checked under basic, given the number of rules currently off in basic. There are more basic holes (strictDictionaryInference) that are fine in basic. I'd be fine to have this in strict. With respect to strict rules, there are several I'd consider much more difficult to satisfy for most codebases (the unknown rules). For override rules, this strikes me as similar to the strictness of the property vs. attribute distinction. I'll occasionally get errors when I have a protocol define a field as an attribute, but the implementing type used a property. It's kind of rare that the error caught was a real issue, but it's easy to correct. My opinion here is weak, mainly because I can't remember a bug this would have caught, but I'd probably keep it enabled.
There are pragmatic reasons for this to be a rule that is on in basic. If this is always checked, then it becomes safe to narrow unions based on the type of attributes (tagged unions) without any special additional syntax for it. This attribute is disjoint and invariant across all members of the union? Simply knowing the attribute type also gives the parent type. In the case of it only differing between some members, this would still allow narrowing to those members safely.
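A hedged sketch of what such attribute-based narrowing could look like (class names invented; current type checkers do not necessarily perform this narrowing, and the `isinstance` check on the attribute is only sound under the invariance rule being discussed):

```python
class JsonEvent:
    def __init__(self, payload: dict) -> None:
        self.payload: dict = payload

class TextEvent:
    def __init__(self, payload: str) -> None:
        self.payload: str = payload

def handle(event: "JsonEvent | TextEvent") -> str:
    # payload's types are disjoint across the union members, so if no
    # subclass may override payload with an incompatible type, knowing
    # the attribute's type would identify the member, with no
    # discriminator syntax needed
    if isinstance(event.payload, str):
        return "text"
    return "json"

print(handle(TextEvent("hi")))   # text
print(handle(JsonEvent({})))     # json
```

Without the invariance guarantee, a `TextEvent` subclass could (per today's rules) re-declare `payload` with another type, which is what makes this narrowing unsafe for checkers to perform.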
There's evidence this is something that is wanted in the ecosystem. Pydantic is currently working around this hole in the type system by bolting it on with Annotated, where the user has to specify that a field is safe to treat this way, which would not be necessary if this rule were properly enforced:
```python
TaggedEnum = Annotated[Union[…], Field(discriminator="field_name")]
```
cattrs, a library in a similar space, just assumes this rule applies even though it isn't something type checkers check, and can handle disjoint types of inner attributes in multiple-type recursive (de)serialization.
And this is resurfacing from the original 2017 mypy issue because of the complications that not having this rule creates for the rules we would need for intersections.
It's things like this that lead me to value correctness as a first priority. There are clearly negative side-effects by this not being followed where people then feel the need to bolt on additional things to compensate. While it is sometimes easy to see a currently known positive, it is almost impossible to correctly predict all the long-term side effects of not following rules and how it can interact with future things people want to accomplish with the type system.
To go back to the quote from TypeScript's lead developer: "100% [type] soundness is not a design goal [of TypeScript]. Soundness, usability, and complexity form a trade-off triangle. There's no such thing as a sound, simple, useful type system."
I disagree that it is impossible to make a type system that is fully correct (sound), simple (simplicity varies, but it should match the complexity of the underlying task and no more), and useful. I think simply and explicitly not adding higher-kinded types accomplishes this (see Roc lang's FAQ on their decision here). A type system doesn't need to support everything, but everything it does support, it should support correctly. That is, only take on complexity in the type system as there is demand for it, and ensure it is design-scoped in a way that only those who need the complexity interact with it.
Where people are currently "fighting" the type system should not be used to argue that the type system should be more flexible and abandon basic principles (which is in direct opposition to the soundness people want from adding typing); it should be used to show that there are things the type system should gain ways to understand about the intent of developers (improving the ergonomics of specific cases). Approaching it from this angle, starting with what is correct and adding ways for developers to ergonomically and correctly express their intent, is how one could build a system that remains coherent no matter what is added over time.
@mikeshardmind, I think it's safe to say that your requirements and expectations for type checking differ significantly from the vast majority of Python developers.
If you're writing code for production systems that must be provably correct (utilizing formal methods or similar techniques), then Python is probably a poor language choice. I've implemented systems like this, and it would never have occurred to me to use Python for these systems. Even if the Python static type system were provably 100% sound (and it's clearly not — and is unlikely to become so), the Python runtime is full of exceptions, special cases, unexpected behaviors, and other complexities that were added for usability and backward compatibility reasons. And most Python libraries are not built with provable correctness (or even clearly-defined interface contracts) in mind. So perhaps this comes down to a mismatch between your requirements and the language you're using. It's interesting that you readily ruled out TypeScript but not Python. I think of them as similar, although in my personal experience, I find that the code I write in TypeScript is much more robust and bug-free than the code I write in Python.
Even if Python is an appropriate language for your use case, you're going to find that your requirements and expectations are pretty disjoint from mainstream Python developers. That could explain the apparent "impedance mismatch" that you've observed in recent discussions with the broader Python typing community. Most members of this community are focused on serving the needs of mainstream Python developers, and we prioritize our investments accordingly.
There are on the order of 10M active Python developers in the world today, and that number is growing year over year. The vast majority of these developers (90%+) do not enable any form of static type checking. The majority of Python developers do benefit from static type analysis by way of language server features (completion suggestions, semantic highlighting, semantic-aware searches, refactoring tools, etc.), but their code is still full of bugs that could easily be prevented if they simply enabled static type checking.
Why don't more developers enable type checking? This is partly explained by lack of knowledge (how to enable it, what benefits it confers, basic syntax for type annotations, etc.). Most Python developers have no desire to learn about generics, type theory, variance, type narrowing, overloads, or other advanced concepts in the Python type system. This is why simplicity and usability are so important.
Another big reason is that developers who try to enable type checking get frustrated quickly when a tool reports errors that they perceive to be false positives. It's great if a tool can tell you about a potential bug, but if you need to deal with (suppress, work around, or ignore) multiple false positive errors for every actual bug, you will be inclined to give up on type checking. For this reason, we're very conservative about which diagnostic rules we enable by default in pyright's "basic" type checking mode. If we enable rules that result in false positive errors in typical Python code, it will turn users away from static type checking.
I would consider it a big step forward if we could increase the percentage of Python developers who enable basic type checking by even 1%. Think of the number of bugs that could be prevented (and coding efficiencies gained) if 100K more Python developers started to use static type checking on a daily basis. The impact would be huge. This is the lens by which we decide which diagnostics to include in the "basic" type checking rule set. With this understanding, it doesn't make sense to include reportIncompatibleMethodOverride and reportIncompatibleVariableOverride in basic type checking. Common coding patterns in Python violate these rules. I recognize that it's not technically type safe, but that doesn't mean such code has a bug; it means that the code is somewhat fragile and has the potential for a bug. If we were to include these diagnostic checks in basic type checking, it would work against the goal of encouraging more Python developers to enable basic type checking.
The "strict" rule set in pyright is targeted at users who are already bought in to the benefits of static type checking and are more willing to make accommodations in their code. This allows them to prevent classes of bugs that are less common and more subtle. For "production" uses of Python, I recommend enabling at least some, if not all, of the strict-mode diagnostic rules. That includes reportIncompatibleMethodOverride
and reportIncompatibleVariableOverride
. But keep in mind that most Python developers are not writing "production" Python code. For non-production Python code, basic type checking mode is sufficient — and still confers significant benefits over no type checking.
@amacfie said:
I don't have an opinion on whether this should be checked unconditionally but at least having the option would be nice
If this check isn't included in the "strict" rule set, then it's unlikely we'd provide a separate rule to check for it. Pyright currently has nine rules that are implemented but not enabled in strict mode. You can see these at the bottom of this table. These rules tend to get very little usage because they are difficult to discover, and many of them require the adoption of coding practices that most Python developers (even those who are writing production code) find too constraining. If a diagnostic rule is too esoteric or disruptive to enable in "strict" mode and is unlikely to be discovered or used by all but a handful of pyright users, it's hard to justify implementing and maintaining it. For that reason, we generally don't add new diagnostic rules unless they are enabled in "strict" mode.
If there is sufficient demand, we could consider adding a "stricter" mode that includes a superset of the "strict" mode diagnostic rules.
@mikeshardmind said:
If this is always checked, then it becomes safe to narrow unions based on the type of attributes (tagged unions) without any special additional syntax for it.
Both pyright and mypy already support type narrowing for tagged unions in the case where the field uses a literal type.
```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class Cat:
    pet_type: Literal["cat"]
    meows: int

@dataclass
class Dog:
    pet_type: Literal["dog"]
    barks: float

def func(x: Cat | Dog):
    if x.pet_type == "cat":
        reveal_type(x)  # Cat
    else:
        reveal_type(x)  # Dog
```
For a full list of type guard patterns supported by pyright, refer to this documentation.
@mikeshardmind said:
intersections (future) are significantly simpler to correctly specify if this hole is fixed
I don't buy that argument. You're conflating two separate issues. Intersections can be added to the type system without addressing this hole. There's no reason the specification of intersections should depend on this. Even if this hole still exists, intersections would work fine for the use cases they are targeting.
I'll point again to TypeScript, which supports intersections and does not enforce invariance of mutable variable overrides. Practicality over purity.
Returning to the original topic of this thread, I see a couple of ways we could proceed:
1. Enforce invariance for mutable variable overrides as part of the existing reportIncompatibleVariableOverride check. This would satisfy the enhancement request.

I'm leaning toward option 1. I think this can be justified because this diagnostic rule is enabled only in "strict" mode. Those who have opted in to strict type checking are already sold on the benefits of type checking, and they're going to be less likely to see these new errors as an inconvenient false positive and more likely to see them as type-related fragility that they should address in their code.
I'm going to leave the issue open a bit longer to invite opinions from other people — here or in the mypy issue tracker.
I'd feel better about proceeding with option 1 if there were some commitment from the mypy maintainers to also implement this enhancement. It's best for type checkers to remain consistent unless there's good reason to diverge. Each time we deviate, we create pain for developers who want to use multiple type checkers for their code base.
Jukka's latest response on the mypy issue tracker thread makes it sound like he's supportive. Perhaps we can now get someone to volunteer to post a PR so the issue doesn't go dormant for another six years.
If you're writing code for production systems that must be provably correct (utilizing formal methods or similar techniques), then Python is probably a poor language choice. I've implemented systems like this, and it would never have occurred to me to use Python for these systems. Even if the Python static type system were provably 100% sound (and it's clearly not — and is unlikely to become so), the Python runtime is full of exceptions, special cases, unexpected behaviors, and other complexities that were added for usability and backward compatibility reasons. And most Python libraries are not built with provable correctness (or even clearly-defined interface contracts) in mind. So perhaps this comes down to a mismatch between your requirements and the language you're using. It's interesting that you readily ruled out TypeScript but not Python. I think of them as similar, although in my personal experience, I find that the code I write in TypeScript is much more robust and bug-free than the code I write in Python.
Even if Python is an appropriate language for your use case, you're going to find that your requirements and expectations are pretty disjoint from mainstream Python developers. That could explain the apparent "impedance mismatch" that you've observed in recent discussions with the broader Python typing community. Most members of this community are focused on serving the needs of mainstream Python developers, and we prioritize our investments accordingly.
Without going too far into it (and possibly saying more than I should), there's a lot of code that predates stricter rules that have come into play over time. We have additional tooling for various things; however, there's a looming deadline for type safety as a requirement on a few "critical" projects. The required amount of safety is somewhere between what Python type checkers currently provide and what Kotlin provides. Had I noticed these existing holes in Python type checkers earlier, we wouldn't be having this discussion, because these projects would either not be in Python or we'd have implemented our own type checker to meet our needs.

As it stands, I'm quite likely to recommend they cease being in Python. Even with option 1 and this fixed, the prevailing attitude that correctness doesn't matter doesn't leave confidence in a future where correctness is prioritized, and the cost of creating an all-new type checker, versus just using a language that does support the evolving internal guidelines of safety trumping developer ease, doesn't seem worth it. We'd be creating a type checker that then intentionally diverges from the ecosystem, which would make it harder to onboard developers. That's why my goal has been to improve correctness in the ecosystem. If the ecosystem as a whole doesn't see correctness as a goal, I should step away and spend my time advocating against the use of Python in situations where it matters, rather than attempt to bridge a gap with those uninterested in even the typeable subset of Python having a notion of type safety.
intersections (future) are significantly simpler to correctly specify if this hole is fixed
I don't buy that argument. You're conflating two separate issues. Intersections can be added to the type system without addressing this hole. There's no reason the specification of intersections should depend on this. Even if this hole still exists, intersections would work fine for the use cases they are targeting.
I'll point again to TypeScript, which supports intersections and does not enforce invariance of mutable variable overrides. Practicality over purity.
With all due respect, you yourself found issues with TypeScript's intersections while we were discussing it, such as treating T & Any as Any vs. irreducible, and an undocumented ordering that is an implementation detail for resolving some conflicts. According to your own findings, simply refactoring to rename something could change the behavior of an intersection in TypeScript. TypeScript isn't a good example here, unless your argument is that I should abandon any hope of a sound type system in Python, even via long-term improvements to definitions.
The specification for intersections gets simpler because, if it's properly impossible to narrow mutable fields in subclasses, then the rule for mutable fields is simply that they must be identical across operands or defined on only a single operand of the intersection, rather than creating rules for resolving what type such an attribute would be by trying to intersect all the way down.
the prevailing attitude that correctness doesn't matter doesn't leave confidence for there being a future where correctness is prioritized
If I understand you correctly, you're using the term "correctness" to mean "a type system that provides 100% adherence to type theory and a type checker that fully validates conformance to that type system". That is, you're using the term "correct" as a qualifier for the type system and the type checker, not the code it is operating on.
When I use the term "correctness", I use it to describe the Python code being developed, not the tools. My definition of "correct" code is code that is bug-free — i.e. it behaves consistently with its specification under all intended uses. I actually prefer the word "robust" over "correct" because "robust" captures the notion that it's not sufficient for code to be correct only when it is first written. It also needs to be maintainable and remain correct over its lifetime as it is modified and extended by teams of engineers.
Your notion of a "correct type system" is neither necessary nor sufficient to produce code that is robust. A "reasonably complete and sound" type system, if used correctly, will catch the vast majority of bugs that can be caught through static analysis in a dynamically-typed language. If you're relying only on static type checking to catch bugs in your code, you will still have buggy code. No type system — even one that is provably 100% sound and complete — will catch all bugs. To have robust code, you need to also employ good software design practices, avoid certain code patterns that lead to fragility, and use sound testing strategies (unit testing, integration testing, fuzzing, pen testing, performance testing, etc.). For systems that demand the highest levels of correctness guarantees, you may need to go further and use techniques like formal methods.
I've invested thousands of hours of my time into the development of pyright. My motivation is to help developers write robust (bug-free and maintainable) Python code as efficiently as possible. I can't speak for the entire Python typing community, but I suspect that this goal motivates many of those who invest their time in writing type checkers, maintaining stubs, writing documentation, answering questions, etc.
By prioritizing "type system correctness" above all, I think you're missing the bigger picture. You're trading off other important factors like usability and simplicity — two factors that are core to Python's success as a programming language.
TypeScript isn't a good example here unless your argument is that I should abandon any hope of a sound type system in python even via long-term improvements to definitions.
Once again, I don't think the overarching goal is a "sound type system". The goal is to provide tools to Python developers to help them write robust code efficiently. It has been demonstrated that there are some use cases for intersection types in Python. I think these use cases are pretty esoteric and far down the priority list compared to many other investments that the typing community could make at this time, but I concede that there are some uses for intersection types in Python.
TypeScript made some pragmatic tradeoffs in its implementation of intersections, and TypeScript users today enjoy a typing feature that works well for its intended use cases. In my opinion, the TypeScript designers struck a good balance between utility, simplicity, and usability. Yes, there are some edge cases where TypeScript's intersections are not sound from the perspective of type theory, but this doesn't matter in practice because developers rarely, if ever, hit those cases. I've used intersection types for many years within TypeScript and was never even aware of these edge cases until I constructed examples specifically to probe the boundaries. I can say with high confidence that none of these behaviors has ever resulted in an uncaught bug in any TypeScript code that I've ever written, nor have these behaviors gotten in my way of utilizing intersection types where I needed them.
If I understand you correctly, you're using the term "correctness" to mean "a type system that provides 100% adherence to type theory and a type checker that fully validates conformance to that type system". That is, you're using the term "correct" as a qualifier for the type system and the type checker, not the code it is operating on.
Not quite. If the language defines a type system that allows type errors to be invisible, then the code it operates on may not be correct, whereas if the type system behaved as any reasonable person would expect, there would be warnings before the code ever left the basic design-exploration stage. Yes, type checking should not be the only way code is checked, but we should not willingly build tools that are meant for type checking yet don't actually check types properly.
Considering the case here, it requires misuse that I would never personally write, but there were legitimately people arguing to complicate the definition of intersections merely because someone could construct a nonsensical subclass. The problem with unsoundness in a type system is that it pervades everything that relies on it, including the type system itself. Below this part, you talk about maintainability, yet you'd argue for a type system that needs more exceptions in its rules to account for what you see as pragmatic, complicating any notion of coherence, specification, or parity between implementations.
If you're relying only on static type checking to catch bugs in your code, you will still have buggy code. No type system — even one that is provably 100% sound and complete — will catch all bugs. To have robust code, you need to also employ good software design practices, avoid certain code patterns that lead to fragility, and use sound testing strategies (unit testing, integration testing, fuzzing, pen testing, performance testing, etc.).
Agreed, but you're making the case that static type checking should provide negative value. A false sense of type safety for rules that are actually defined by PEPs (the closest thing we have to a specification at the moment) but not followed is worse than not providing a check at all, because it creates the false impression that the case has been checked for.
When I use the term "correctness", I use it to describe the Python code being developed, not the tools. My definition of "correct" code is code that is bug-free — i.e. it behaves consistently with its specification under all intended uses. I actually prefer the word "robust" over "correct" because "robust" captures the notion that it's not sufficient for code to be correct only when it is first written. It also needs to be maintainable and remain correct over its lifetime as it is modified and extended by teams of engineers.
The underlying design flaw here (using a subclassing pattern for something not shared between subclasses) should be caught in code review before it ever happens, but the truth is that not all companies have such robust processes. I'm happy working in the stricter environment that I do, but robustness would argue that detecting fragile subclassing should be on for everyone, preventing such a fragile starting point from being created in the first place, before it can break other things unexpectedly down the line.
Yes, there are some edge cases where TypeScript's intersections are not sound from the perspective of type theory, but this doesn't matter in practice because developers rarely, if ever, hit those cases. I've used intersection types for many years within TypeScript and was never even aware of these edge cases until I constructed examples specifically to probe the boundaries. I can say with high confidence that none of these behaviors has ever resulted in an uncaught bug in any TypeScript code that I've ever written, nor have these behaviors gotten in my way of utilizing intersection types where I needed them.
This doesn't argue what you seem to think it does. If nobody should ever run into one of those edge cases where it is unsound, then why should the behavior not be to detect the issue to prevent the edge case? Surely this means there should be no friction while preventing a bad case that could later change behavior simply by renaming something. And if you weren't aware of the edge cases before, can you state with confidence that none of your code is subject to it and you happen to be on the side of the edge case that behaves as you need it to?
The process is self-selecting for people who don't care about soundness as well: PEPs require implementation in an existing type checker, so you have a feedback loop that amplifies the importance you're placing on "pragmatism", because the process filters out anyone who has different values and isn't stubborn enough to keep arguing for a notion of improvement that doesn't already align with what's well represented. There's no room to show how a fully sound type system could remain ergonomic, or how implementations could provide good error messages, because the premise of type safety isn't something that is accepted.
Outside of the normal python typing circles, typing is frequently seen in one of two negative lights:
The first stems from the fact that various IDEs (including Visual Studio Code) used to provide good completions from docstrings alone, but no longer do, deferring instead to typing. "Jump to definition" will now happily jump to a type stub rather than the implementation. It gets worse when the type system doesn't support what is needed: users want the typing, but there is no good way to provide it other than lying to the type checker with something mostly true, and then the lie needs to be documented and maintained.
The latter was historically because of how lacking Python's typing was. Personally, I've seen it get better over time, and I'd been hoping to push it past that point, but I'm at the point where you're telling me that I should view typing only as a chore for IDE completions, and that for any notion of soundness a type system should provide, I should "just rely on other testing".
I think what the type spec says and what type checkers implement is not expected to fully match. There are other parts of type checker implementations that intentionally deviate from sound type checking and break reasonable type system rules for pragmatism's sake. The spec is intended to describe the ideal case; type checkers should mostly follow it, and it would be nice if exceptions were documented, but full conformance seems a very unlikely goal.
I'd say the resolution of this issue is mostly irrelevant to a spec for intersection types, since the spec can treat overriding a mutable field in a subclass as a type error even if type checkers decide not to error or warn on it. There are other places today where type checkers behave similarly: `__init__`/`__new__` both break Liskov compatibility in the vast majority of codebases and are a type-soundness hole that type checkers ignore out of pragmatism.
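The `__init__` exemption can be seen in a short sketch (class names here are hypothetical, chosen for illustration): subclasses routinely change the constructor signature, which would be flagged as a Liskov violation for any other method, yet type checkers deliberately look the other way.

```python
class Shape:
    def __init__(self) -> None:
        self.sides = 0

class Polygon(Shape):
    # A different __init__ signature: an LSP violation for any other
    # method name, but type checkers deliberately exempt constructors.
    def __init__(self, sides: int) -> None:
        self.sides = sides

def make(cls: type[Shape]) -> Shape:
    # The hole in action: cls may be Polygon, whose constructor
    # requires an argument, so this call can fail at runtime even
    # though the code type-checks.
    return cls()

make(Shape)      # fine
# make(Polygon)  # raises TypeError: missing required argument 'sides'
```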
Also, I think you strongly underestimate the pushback typing has in the wider Python community. There have been asks by users in other issues (pylance more so) to allow disabling very fundamental type checking rules. Strict mode, which covers many interesting checks, is rarely used. I think your usage/company experience just does not align with that of the normal Python user. I'm fond of strict mode and do try to push it on codebases I work on, but I fully expect basic mode to lack a lot of checks. Basic mode should not be considered for type system specification and should be expected to break from the spec in various ways in terms of which errors appear.
I don't think I do. I'm very aware of the negatives, and I think that if typing is to provide any value it needs to actually solve something for people. It's a problem of "who is this for?" Is it for users? The same users who want to disable the fundamentals of type checking? Well, now if they right-click and go to definition with pyright as the language server, they often end up in typeshed. Is this useful to them?
What about for library maintainers? Well, IDEs stopped using docstrings to provide information, so IDEs created an artificial push for typing.
What about users who want sound typing? Nope, we don't do that here, you better explicitly test what happens when your code is given every expected type and unexpected one even though the type system should handle this.
So plainly, who is typing for if not for those who have bought into type checking? Why is it being pushed on other people who have not, thereby making it need to support them when they don't even want it and are annoyed by it?
Other languages where typing is enforced manage to generate perfectly reasonable error messages. The idea that a type system is at odds with its users is a problem of that type system, not of all type systems, and says more about the decisions of those making and promoting it.
Typing is super divisive in Python because it is intrusive. People are building things other than type checkers on top of typing, and the system doesn't even work for those who want typing. So you have external pressure from other people to add type annotations, even if the type system is largely pointless from the perspective of the person being pressured.
Until recently, I'd been one of those "not worth my time to engage with typing contributions and discussions" people, and unfortunately, I think I've been proven right. It was a waste of my time to bother giving feedback, because it's now been explicitly said, by people with actual decision-making sway, that what I want from it isn't a goal. @mikeshardmind is right that you have a feedback loop, but it's more than that: it's a hostile echo chamber where any time someone actually wants a type system to function as a type system, they get told "why use Python?" So why indeed? Is it that we need to, for things that existed before the type system was pushed? That there's large-scale reliance on a language that isn't evolving in a way that keeps up with modern concerns? If I have to add types to something, shouldn't they mean something?
Python is so widely used for things it "shouldn't" be used for because it's fast to write, and because of tooling and native libraries. it's "ffi glue". But this discussion shows that the people making decisions never want to have to consider that it could be more and doesn't have to be an annoyance to users for that to happen. They see the current annoyances users have and blame the type system, not the people pushing a broken type system on people who don't want it.
The people who say "oh, just don't use Python for that" are speaking from a point of luxury that ignores how large code bases change over time, or how hiring decisions sometimes treat Python as the "lowest common denominator": more devs can work on it, but they may not have experience with formal types. For those people, Python's type system being inconsistent is a nightmare. How do they learn it? It isn't self-consistent, and there's no good document to read about it; do they go follow a bunch of blog posts that reflect the history of an implementation? I'd go a step further than what's already been said in this thread and say that a poorly defined and poorly adhered-to type system isn't just bad for code, it's bad for people learning, which exacerbates frustration when typing feels forced on them.
I do use Python for core production systems and have around 100k lines of code. A type system does not need full soundness to be useful, and the current type system has caught many bugs for me. Even languages with a much stronger focus on their type system, like Haskell, have loopholes and unsafe mechanisms you can use.
I view the current Python typing system as roughly a 95% solution for consistency checks. There are many refactors I can make with much more confidence with types than when the codebase lacked them. I will often update some APIs in a backwards-incompatible way or change their types, and rely on the type checker to track all the places I need to update, and have that work out safely. I think being useful but incomplete is the expectation, and your arguments feel very dismissive of people having gained a lot in spite of soundness holes, some of which are rarely an issue in practice. If full soundness is a hard requirement for a type system to be useful, I think Python and almost all languages fail that bar.
Most users learn typing from the mypy/typing/pyright documentation and never worry about knowing all the rules. Many recent typing features that are quite useful to some code (`ParamSpec` decorators, for example) are unknown to them, and basic usage with simple types is fine. I very rarely even see people write custom protocols. A basic understanding of variance is something many Python developers treat as an advanced topic, including core developers who maintain the language.
This has devolved into somewhat of a religious debate. @mikeshardmind, I've attempted to explain my viewpoint and the motivations behind pyright. You have a different viewpoint, which is fine. I think I understand your position, but I don't agree with parts of it. Let's leave it there.
When it comes to pyright and pylance, I have a good understanding of our user base, and I'm going to continue to focus on their needs and priorities.
I'd like to turn this thread back to its original purpose — discussing a specific enhancement request for pyright. If you want to debate this further with the broader type community, feel free to post to the python/typing discussion channel.
@erictraut How else should it have gone? You brought up other type systems to argue why something shouldn't be in basic mode, while ignoring the issues with your own argument, and then implied that they shouldn't be using Python.
As one of the major repos hit by this, I'm curious if there'd be a way around this other than using TypeVars. Could I perhaps leverage `Final` more somehow? (Currently you can't override a final variable in a subclass, which is fair enough.)
PEP 591 defines `Final` to mean both "read only" and "illegal to override". What we need here is just the "read only" half of that. There have been some discussions about adding something like that (see here and here).
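A sketch of why `Final` is too strong for this use case (the attribute name is made up): PEP 591's `Final` forbids the subclass override itself, not just mutation, so the narrowing pattern under discussion is rejected outright.

```python
from typing import Final, Optional

class Base:
    # Final gives the "read only" behavior wanted here, but also the
    # "illegal to override" behavior that is not wanted.
    clan: Final[Optional[str]] = None

class Sub(Base):
    clan = "Knights"  # static error: cannot override a Final attribute
```

The override works at runtime; the rejection is purely a static-checking one, which is exactly why a "read only but overridable" marker would be needed.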
@Gobot1234, are you open to switching to a `@property`? That works only for instance variables, not class variables, so I don't know if it's viable for your use case.
Wouldn't a property just add another layer of indirection and type unsafety? I'm imagining adding a property (`clan`) which forwards to `_clan`, which would have to be a union or `Any` depending on how lazy I'm feeling. Class variables were just used for conciseness. I might throw a bunch of defaulted type vars at it until we get `ReadOnly` (I forgot about that `ReadOnly` proposal; maybe it would be nicer than `TypedMapping` after all).
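The type-var route mentioned here might look roughly like the following (a sketch with made-up names; with PEP 696 type parameter defaults, `ClanT` could additionally be given a default):

```python
from typing import Generic, Optional, TypeVar

ClanT = TypeVar("ClanT", bound=Optional[str])

class User(Generic[ClanT]):
    # The attribute's type is a parameter of the class, so each subclass
    # pins it down instead of covariantly overriding a mutable attribute.
    def __init__(self, clan: ClanT) -> None:
        self.clan: ClanT = clan

class ClanMember(User[str]):
    pass

class Anonymous(User[None]):
    pass
```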
You can remove the definitions that aren't shared from the base class (moving them to the subclasses), and export a type that's the union of the subclasses. I largely think this is something best handled by generics, but there are other options here.
The ability to safely narrow unions by the type of an attribute, rather than by the handful of constructs currently supported, would make this an actually useful approach, since your case largely involves a set of attributes that are collectively present or `None`; you wouldn't need to also export the subclasses for explicit isinstance checks. But if this isn't something checked in basic mode, then that narrowing would only be "safe" under strict.
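A minimal sketch of the union-of-subclasses shape being discussed (names are illustrative), using the isinstance narrowing that type checkers do support today:

```python
from typing import Union

class BaseUser:
    name: str = ""

class ClanUser(BaseUser):
    clan: str = ""  # present only on this subclass

class SoloUser(BaseUser):
    pass

# The exported type is the union, not the base class.
AnyUser = Union[ClanUser, SoloUser]

def describe(user: AnyUser) -> str:
    if isinstance(user, ClanUser):
        # Narrowed: user.clan is known to exist and be a str here.
        return f"{user.name} of {user.clan}"
    return user.name
```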
I agree unions would be nice in some circumstances, but I don't think I can make them work here, which is annoying.
Implementing this wasn't too bad, aside from running into a typing bug: https://github.com/Gobot1234/steam.py/commit/d6f41e687ef413dbef4d241d1c98b4ca0d5dfe80. I'd be happy if this was implemented as a normal check for better safety.
I've decided to move forward with this change. It will be included in the next release of pyright.
This is addressed in pyright 1.1.328, which I just published. It will also be included in a future release of pylance.
This should, and now does, produce an error. Specifically, B is an invalid subclass of A. See https://github.com/python/typing/issues/1458#issuecomment-1716257975 and https://github.com/python/mypy/issues/3208 for context.
The unsafe nature shows up when B relies on its narrower type, but a function taking an A assigns foo as only a str. This should be handled by users with a generic instead, and these attributes should be invariant, just as list element types are, for the same reasons.
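The hole can be made concrete with a small sketch (class and attribute names invented for illustration): the covariant override type-checks under the old behavior, yet an ordinary function that only knows about A can silently break B's invariant.

```python
class A:
    foo: "str | int" = ""

class B(A):
    # Covariant override of a mutable class-scoped attribute; this is
    # the pattern the new check rejects.
    foo: int = 0

    def increment(self) -> int:
        return self.foo + 1  # relies on foo staying an int

def widen(obj: A) -> None:
    obj.foo = "not an int"  # perfectly legal for an A

b = B()
widen(b)         # accepted, since a B is (nominally) an A
# b.increment()  # now raises TypeError at runtime
```

This is the same reason `list` element types are invariant: once mutation is allowed through a wider view, the narrower view's guarantees are gone.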