erikwright opened this issue 6 years ago
The point is that it is too hard to track which classes are instantiated in the body of a given function, and which are not. If we had such tracking, we could allow calling functions with abstract classes at particular positions.
Taking into account that such tracking would take some effort, and that in the year this behaviour has existed this is the first such request, I would recommend just using # type: ignore. Even if this is implemented at some point, it is quite low priority.
As far as I can tell, the same problem applies to Protocol as well. Is there any way to have a TypeVar that references anything abstract?
@glyph Perhaps you could use # type: ignore to silence the error as suggested above? It's clearly not optimal, though. Can you give more information about your use case?
I wonder if we could enable Type[x] with abstract and protocol types but disallow creating an instance (i.e. they wouldn't be callable). They could still be used for things like isinstance checks.
What I'm trying to do is to write a class decorator, @should_implement(SomeProtocol), which type-checks the decorated class to ensure it complies with the given protocol, so ignoring the error would defeat the whole point ;-).
Makes sense, though I'm not sure if lifting the restriction would be sufficient to allow the decorator to be checked statically. Right now a runtime check is probably the best you can do with that syntax, at least without a plugin. For a static check you could use a dummy assignment (which is not very pretty).
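Something along those lines, with illustrative names (HasX and Impl are invented for the example), might look like this:
from typing_extensions import Protocol

class HasX(Protocol):
    x: int

class Impl:
    x: int = 0

# The "dummy assignment": mypy checks here that Impl satisfies HasX,
# and reports an error if it does not.
_check: HasX = Impl()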
Increasing priority to normal since I think that the current approach is too restrictive. I'm not sure what the best way to move forward is, though.
@JukkaL 🙏
@JukkaL I think I found a workaround, leveraging the little white lie that Protocols without constructors are callables that return themselves:
from typing import Callable, Type, TypeVar
from typing_extensions import Protocol

class AProtocol(Protocol):
    x: int

protocol = TypeVar("protocol")

def adherent(c: Callable[[], protocol]) -> Callable[[Type[protocol]], Type[protocol]]:
    def decor(input: Type[protocol]) -> Type[protocol]:
        return input
    return decor

@adherent(AProtocol)  # No error; Yes is the expected shape
class Yes(object):
    x: int
    other: str

y = Yes()
y.x
y.other

@adherent(AProtocol)  # We get an error here, as desired
class No(object):
    y: int
I should note that there's a big problem with my workaround: you can only apply the decorator once, and then it breaks down. There's a variant here where you can abuse a Generic instead, and then write Adherent[AProtocol](Yes) and Adherent[AProtocol](No), but these have to come after the class body and look somewhat uglier.
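That variant looks roughly like this (a sketch, reusing the names from above and relying on the same white lie):
from typing import Callable, Generic, TypeVar
from typing_extensions import Protocol

class AProtocol(Protocol):
    x: int

protocol = TypeVar("protocol")

class Adherent(Generic[protocol]):
    # A class with a no-argument constructor is a Callable[[], protocol],
    # so mypy checks the argument structurally against the protocol.
    def __init__(self, c: Callable[[], protocol]) -> None: ...

class Yes:
    x: int

class No:
    y: int

Adherent[AProtocol](Yes)  # OK
Adherent[AProtocol](No)   # error, as desired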
I'm not sure what the best way to move forward is, though.
A possible ad-hoc solution (which may be not so bad) is to just remove this check for class decorators, because it also causes trouble for dataclasses; see https://github.com/python/mypy/issues/5374, which has 12 upvotes.
This "Only concrete class can be given" error also seems impossible to overcome for code that is supposed to accept an abstract class and return some instance of that class, even if it would have to create the class right then and there using some fancy mechanism like type(a, b, c)
. Think unittest.Mock
and similar dummy object factories.
I have also just realized that this breaks (mypy raises this error) even when you write a function that acts like isinstance(...) with the provided class. Makes you wonder how isinstance is actually typed in typeshed (gonna check that now).
from abc import ABC
from typing import TypeVar, Type

T = TypeVar('T')

class Abstract(ABC):
    pass

def isthisaninstance(this, type_: Type[T]) -> bool:
    return isinstance(this, type_)

isthisaninstance("", Abstract)  # type: ignore :(
Is there any way to overcome this (other than to # type: ignore all function/method calls)?
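(For reference: typeshed gives isinstance a plain type, or a tuple of types, for its second argument rather than Type[T] with a TypeVar, which is why isinstance itself never trips this check. Paraphrased, not verbatim:)
# Rough, paraphrased shape of the typeshed stub for isinstance; the second
# parameter is a bare `type`, so no TypeVar solving (and no abstractness
# check) is involved:
def isinstance(obj: object, class_or_tuple: type) -> bool: ...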
Is there any way to overcome this
In some cases you may use if TYPE_CHECKING: to conditionally define the base class, so that mypy sees the base class as object while at runtime it will be ABC:
from abc import ABC
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    Base = object
else:
    Base = ABC

class Abstract(Base):
    pass
You'll lose any ABC checking by mypy, however.
@JukkaL, thanks. That's not enough, though. Mypy treats classes as abstract (frankly, it should) even when they don't specify ABC as their parent; it's enough for them to have @abstractmethods on them, for example.
Maybe it would be possible to use a similar trick with TYPE_CHECKING to override even @abstractmethod (set it to some no-op decorator), but that's really pushing it :D.
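For instance, a sketch of that trick (note that it throws away all of mypy's abstractness checking, which is the point, and the cost):
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # mypy only analyses this branch, so no method is ever seen as abstract.
    def abstractmethod(f):
        return f
else:
    from abc import abstractmethod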
Today, we have dealt with this issue in our code by using @overload to allow our code to be called with an abstract class, and possibly even return its instance; however, the user has to specify the return type for the variable where they are storing it. Which isn't that bad.
from typing import Type, TypeVar, overload

T = TypeVar('T')

# "..." stands for arguments redacted from the real code.
@overload
def do(type_: Type[T], ...) -> T:
    pass

@overload
def do(type_: type, ...) -> T:
    pass

def do(type_: Type[T], ...) -> T:
    """
    Do ...
    """

from abc import ABC

class Abstract(ABC):
    pass

var: Abstract = do(Abstract, ...)
I've had to redact some of the bits, but this works, unless somebody takes out the type annotation for var. It's a bit wonky because, of course, if you lie to your face and mistype the type annotation for var, do will likely give you something different. But I think it's better than if TYPE_CHECKING or # type: ignore :D. Especially since we actually use this particular function with concrete classes.
Just to add another data point here:
This "Only concrete class can be given" error also seems impossible to overcome for code that is supposed to accept an abstract class and return some instance of that class, even if it would have to create the class right then and there using some fancy mechanism like
type(a, b, c)
. Thinkunittest.Mock
and similar dummy object factories.
This is what happens in Injector (a dependency injection framework, I imagine most similar frameworks will have this pattern somewhere). The Injector class' get() method is typed like this:
get(interface: Type[T], ...) -> T
The behavior is: if T is abstract, an instance of something non-abstract will be provided (if bound earlier) or a runtime error occurs. From Injector's point of view it'd be cool if this type checked: https://github.com/alecthomas/injector/issues/143 :)
I've run into this problem too: I have a function which takes a (possibly abstract) type and returns an instance of that type; but in my case it does it by calling a class method on the type. What I'd ideally like to be able to do is declare a protocol that inherits from Type[_T], e.g.
class _Factory(Protocol[_T], Type[_T]):
    def from_file(cls, file: io.IOBase) -> _T: ...
to indicate that the argument must be a class which provides a from_file class method returning an instance of itself (_T being covariant). Unfortunately when I try this I get 'error: Invalid base class "Type"'; and if I take the Type base class out then I get lots of other errors referring to the "_T" used in the protocol.
If something like this were possible (and I don't understand type theory nearly well enough to say whether it makes sense), it would make it practical to express concepts like "subclass of X which is instantiatable", "subclass of X which has this __init__ signature", or "subclass of X for which this particular abstract class method has an implementation" (my use case).
Maybe try:
class _Factory(Protocol[_T]):
    def from_file(cls: Type[_T], file: io.IOBase) -> _T: ...
@JelleZijlstra thanks. I tried something like that and couldn't get it to work. Here's an example with your suggestion:
#!/usr/bin/env python3
from abc import ABC, abstractmethod
from typing import Type, TypeVar
from typing_extensions import Protocol

_T_co = TypeVar('_T_co', covariant=True, bound='Base')
_T = TypeVar('_T', bound='Base')

class Base(ABC):
    @abstractmethod
    def foo(self) -> None: ...

    @classmethod
    def from_file(cls: Type[_T]) -> _T: ...

class Derived(Base):
    def foo(self) -> None: ...

class _Factory(Protocol[_T_co]):
    def from_file(cls: Type[_T_co]) -> _T_co: ...

def make_thing(cls: _Factory[_T]) -> _T: ...

make_thing(Base)
It gives these errors (mypy 0.780):
type_proto.py:21: error: The erased type of self "Type[type_proto.Base]" is not a supertype of its class "type_proto._Factory[_T_co`1]"
type_proto.py:25: error: Argument 1 to "make_thing" has incompatible type "Type[Base]"; expected "_Factory[<nothing>]"
I also tried using _T_co throughout instead of just _T, since I don't think I really understand how variance interacts with generic self, but it didn't work any better.
I'm also hitting up against this issue. I have a similar case where I have a plugin system that utilizes sub-type filtering. The following fails with the same error when interface_type is an abstract class:
import abc
from typing import Collection, Set, Type, TypeVar

T = TypeVar("T")

def filter_plugin_types(interface_type: Type[T], candidate_pool: Collection[Type]) -> Set[Type[T]]:
    """Return a subset of candidate_pool based on the given interface_type."""
    ...

class Base(abc.ABC):
    @abc.abstractmethod
    def foo(self) -> None: ...

class Derived(Base):
    def foo(self) -> None: ...

type_set = filter_plugin_types(Base, [object, Derived])
@Purg As I'm also working on a plugin system, would be interested in hearing more about what you're doing.
Hi @pauleveritt, thanks for your interest! I sent a message to your email you have listed.
Could the message be assigned its own error code as a stopgap? That way it would be possible to silence it globally in codebases where a lot of # type: ignores would be needed otherwise.
Any updates on this issue?
I think assigning this a dedicated error code is a good idea. Finding a good compromise for everyone may be hard, and with e.g. --disable-error-code=typevar-abstract people will be able to decide the safety/convenience balance themselves.
That's a good solution for the time being. In the long run, it might be nice to have a separate type, like Interface, which is a superclass of Type that does everything Type can do except support instantiation (similar to @bmerry's suggestion). Then, the above examples could simply be written with Interface. What do you think?
Yeah, I was thinking about this, but it will probably require a PEP, and all other type checkers will need to agree on exact semantics. This may take a lot of time and effort, so I am not sure it is worth it.
Would an implementation on top of TypeGuard also be possible? So that you could somehow tell TypeGuard that only concrete classes can be returned?
No, this will not help.
The dedicated error code for this landed and will be available in the next release. You can opt out of this check using --disable-error-code=type-abstract (or using your config file).
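(In a config file, that would look something like the following; a minimal sketch, assuming a standard mypy.ini setup:)
# mypy.ini
[mypy]
disable_error_code = type-abstract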
I ran into this issue today, and was surprised to do so. I expected Type[T] with an unbounded T to match any class, even non-instantiable ones. After all, an abstract class is still a type, right? That's what I learnt at university, at least. Just because people often mean an instantiable type when they say Type[T] doesn't mean that that necessarily makes sense from the perspective of type theory.
I would like to suggest a potential solution beyond --disable-error-code=type-abstract, which is a bit coarse (thanks nevertheless, it does help!). I have a function in my library API which should be able to take an abstract base class as an argument, and I'd rather not make all the users disable checking in general, or have to put # type: ignore every time they use my library.
It seems to me that whether a class can be instantiated or not isn't so different from it having a particular class method (__call__ or __new__; yes, I realise that that's not exactly how it is, I'm just claiming it to be a useful analogy). If you give this to mypy
from typing import Type, TypeVar

class A:
    @classmethod
    def fn(cls) -> None:
        pass

class C:
    pass

T = TypeVar('T')

def fn(cls: Type[T]) -> None:
    cls.fn()
it will tell you test1.py:14: error: "Type[T]" has no attribute "fn", which is entirely reasonable, because this would raise if you passed C, even though it would work for A.
To fix this, of course, you need to constrain T to types that have the required function:
from typing import Type, TypeVar

class A:
    @classmethod
    def fn(cls) -> None:
        pass

class B(A):
    pass

class C:
    pass

T = TypeVar('T', bound=A)

def fn(cls: Type[T]) -> None:
    cls.fn()

fn(A)
fn(B)
fn(C)
The above will only return an error for the last line: test2.py:21: error: Value of type variable "T" of "fn" cannot be "C", because C is outside of T's bound, and that's exactly what we want.
So maybe if you want to pass a type to a function and instantiate it, you should have to do something like this:
from typing import InstantiableType, Type, TypeVar

T = TypeVar('T', bound=InstantiableType)

def fn(cls: Type[T]) -> T:
    return cls()

and if you forget the bound it would error on the last line with error: Type[T] cannot be instantiated.
But okay, Type[T]
is assumed to be bound to a virtual supertype of all instantiable types. Take that as a given. What if we could provide an explicit bound to allow abstract base classes?
from abc import ABC
from typing import Type, TypeVar

T = TypeVar('T', bound=ABC)

def fn(cls: Type[T]) -> T:
    return cls()
Here mypy would tell you error: Type[T] cannot be instantiated on the last line, but now
from abc import ABC, abstractmethod
from typing import Type, TypeVar

class A(ABC):
    @abstractmethod
    def absmeth(self) -> None:
        pass

class B:
    pass

T = TypeVar('T', bound=ABC)

def fn(cls: Type[T]) -> None:
    print(cls.__name__)

fn(A)
fn(B)
would happily pass.
I don't know how easy this would be to implement, and it would be a special case for sure, because T would also match classes not derived from ABC or having ABCMeta as their metaclass (perhaps typing.Any could be used instead, or a new special type?). But unconstrained Type[T] not allowing non-instantiable types is also a special case; this is backwards compatible, and it feels natural, at least to me.
One possible solution would be for Python to support both a typing.Type and a typing.AbstractType, where AbstractType doesn't have these same constraints on the class needing to be instantiated. This could also allow for better error checking inside the functions that receive an abstract type as a parameter.
I just ran into this too. The assumption (from the mypy docs) that the type would inevitably be used to try to construct an instance is ludicrous. Even then, AnyIO has a number of abstract classes whose __new__() methods return instances of known subclasses, so even that argument goes right out of the window.
I think this issue should be revisited. There is no reason to believe that a type[T] can be constructed, since you have no idea what its __init__ method accepts. If a function wants to ensure constructability, it should instead accept Callable[..., T] (or a more specific parameter list). If you want a constructable concrete type, then it should accept Callable[..., T] & type[T] when intersection is finally added.
Right now, type[T] is compatible with Callable[[], T], which is a stretch:
class X:
    pass

class Y(X):
    def __init__(self, a: int):
        pass

def f(t: type[X]) -> X:  # Should ideally be Callable[[], X] -- then X would be accepted but Y rejected.
    return t()

f(Y)  # Passes MyPy, but crashes!
I propose making two changes:
1. Remove the type-abstract MyPy error, so that if a function accepts a type[T], it accepts any type[T], whether concrete or abstract.
2. type[T] should no longer imply Callable[[], T]. (That is, it should have all of T's interface except __init__, which doesn't obey LSP.)
Just to be clear, this specifically is for the case where T is literally an unbound TypeVar, right? If it's bound, or if it's type[SomeConcreteClass] rather than type[SomeTypeVar], you'd still be able to determine its __init__ arguments.
I don't think you can determine the __init__ arguments of a concrete non-final class type. Consider type[X] from my previous comment. What are its __init__ arguments? Y matches type[X], and Y.__init__'s signature is (self: Y, a: int). And some Z < X could come along and have a whole new signature.
You're right for any other methods of course, since other methods should obey Liskov's substitution principle.
Consider type[X] from my previous comment. What are its __init__ arguments? Y matches type[X], and Y.__init__'s signature is (self: Y, a: int). And some Z < X could come along and have a whole new signature.
OK, I'm convinced.
For context, I generally feel like incompatible __init__ signatures are sort of inherently broken in Python, dating back to this article on why you can't really use super() in __init__. The Python in that article is painfully ancient, but if you modernize it the same problem still exists today, and MyPy happily typechecks this broken code in strict mode even though it still shows a traceback.
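Modernized, the failure mode looks roughly like this (a sketch; mypy --strict accepts it, yet it crashes at runtime):
class A:
    def __init__(self) -> None: ...

class B(A):
    def __init__(self, b: int) -> None:
        super().__init__()  # statically resolves to A.__init__, so mypy is happy

class C(A):
    def __init__(self, c: int) -> None:
        super().__init__()

class D(B, C):
    def __init__(self) -> None:
        super().__init__(1)

D()  # TypeError: in D's MRO, B's super() is C, and C.__init__ wants an argument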
But aside from pathological cases of diamond inheritance, Mypy does have vaguely reasonable expectations of subclasses in other places, like the fact that it warns about the unsoundness of calling __init__ directly on instances, given its status as a special-case method. So although it's not perfect, it seems reasonable to be a bit more incrementally correct.
I agree with removing the requirement that a class be concrete, but I think at some point we should address the genuine use cases of the type-abstract error. A few weeks back, I proposed adding either a ConcreteType or an AbstractType to resolve this. After discussing with others, I'm now leaning towards ConcreteType, as that way we can remove the type-abstract error, fixing the current incompatibility with Pyright, as well as re-adding support for enforcing that classes are concrete.
I agree with removing the requirement that a class be concrete, but I think at some point we should address the genuine use cases of the type-abstract error.
I supported this on python-ideas and typing-sig, but having read other people's points, I don't see the point anymore. Why should MyPy give a type error for any type[X]? Unless X is final, no type checker can know how type[X] can be constructed in the first place, so instead of giving a type error, I think it should reject all construction. People who want to guarantee construction should use Callable[..., X].
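For example (make and the classes are illustrative), this is exactly the behaviour Callable already gives you:
from typing import Callable

class X:
    pass

class Y(X):
    def __init__(self, a: int) -> None: ...

def make(factory: Callable[[], X]) -> X:
    return factory()

make(X)  # OK: X() takes no arguments
make(Y)  # error: Y's constructor needs an int, caught statically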
For context, I generally feel like incompatible __init__ signatures are sort of inherently broken in Python, dating back to this article on why you can't really use super() in __init__.
Let's have this discussion somewhere else 😄. In short, though, I think there's a bit of a schism in the Python community between people who dislike cooperative multiple inheritance and people who like it. I happen to like it, and don't agree that it's "broken".
adding either a ConcreteType or AbstractType to resolve this.
Given the presence of ABCs, I think calling these "abstract" and "concrete" is both ambiguous and confusing. I think "nominative" and "structural" might be more precise words?
Let's have this discussion somewhere else 😄.
Yeah we're veering into holy war territory here so perhaps we could have a more productive conversation privately. Shoot me an email? I'd definitely be interested in your perspective on how to resolve the problems I see as intractable.
I supported this on python-ideas and typing-sig, but having read other people's points, I don't see the point anymore. Why should MyPy give a type error for any type[X]? Unless X is final, no type checker can know how type[X] can be constructed in the first place, so instead of giving a type error, I think it should reject all construction. People who want to guarantee construction should use Callable[..., X].
As mentioned above, __init__(...) doesn't have the same Liskov Substitution Principle applied to it. I imagine, as a consequence, a ConcreteType wouldn't guarantee the safety of that; however, it could guarantee the safety of an @classmethod def create(...), which is how I've been addressing the issue in my code. The existing type-abstract check is actually very useful, since it prevents me from attempting to register a half-implemented plugin with my plugin manager (avoiding difficult-to-find AbstractMethodError exceptions at runtime), but in other parts of the code it's annoying, because it prevents me from using abstract classes to perform certain lookups.
If the type-abstract error were to be removed, would it be possible to enforce that certain methods are implemented in other ways?
wouldn't guarantee the safety of that, however it could guarantee the safety of an @classmethod def create(...)
Hmm, that's a very interesting point. I assume that you're defining create as an abstract method on the base class:
class Base:
    @classmethod
    def create(cls, ...) -> Self: ...
Then, given
def f(t: type[Base]) -> ...:
    t.create(...)
You want MyPy to report type-abstract if t has an abstract type, since you don't want to call t.create. But even if t is abstract, that doesn't mean that t.create necessarily fails. t.create might return a subclass of t, which is concrete. So I think this type-abstract error isn't appropriate for this case either.
For your problem, I suggest removing create from the base class and defining it on another base class (Creatable), which only your concrete derived classes inherit from. And then matching on type[Creatable] instead of type[Base] in f.
wouldn't guarantee the safety of that, however it could guarantee the safety of an @classmethod def create(...)
Hmm, that's a very interesting point. I assume that you're defining create as an abstract method on the base class: class Base: @classmethod def create(cls, ...) -> Self: ...
Yep, that's correct
Then, given def f(t: type[Base]) -> ...: t.create(...)
You want MyPy to report type-abstract if t has an abstract type, since you don't want to call t.create. But even if t is abstract, that doesn't mean that t.create necessarily fails. t.create might return a subclass of t, which is concrete. So I think this type-abstract error isn't appropriate for this case either.
It's not t.create failing that I'm worried about. Consider this code:
from abc import abstractmethod

class Base:
    @abstractmethod
    def hello(self):
        ...

    @classmethod
    @abstractmethod
    def create(cls):
        ...

class Derived(Base):
    @classmethod
    def create(cls):
        return cls()
In this case, calling create to construct Derived will succeed, but you'll get an AbstractMethodError much later in the program's runtime, as hello isn't implemented. What I'm looking for is a way to prevent subclasses such as Derived from being passed to other code unless they are fully implemented, which type-abstract accomplishes.
For your problem, I suggest removing create from the base class and defining it on another base class (Creatable), which only your concrete derived classes inherit from. And then matching on type[Creatable] instead of type[Base] in f.
Unfortunately this doesn't really work either, since I also want to require that the subclasses implement various other abstract methods that I need for my code to work correctly. I've considered using protocols to implement these requirements, but I'm not sure how they interact with abstract methods, and even if they work correctly, requiring the same list of methods in both a base class and a protocol class would mean a large amount of repeated code. Having something like a ConcreteClass would make the above example much simpler to implement.
Essentially, the dilemma I'm faced with is that the current type-abstract error is very beneficial to my code in many places, but in other places it has led to a massive reduction in type safety, where I've needed to silence errors due to it, which could lead to other, bigger issues being missed (I actually had this happen a few weeks ago, and it was quite painful to debug). As far as I can imagine, the only way to support both in a reasonable way is to have two different Type types to represent it in the typing module.
In this case, calling create to construct Derived will succeed, but you'll get an AbstractMethodError much later in the program's runtime, as hello isn't implemented.
You can have the error happen in create by simply having Base inherit from ABC. It will ensure that abstract classes aren't instantiated. If you don't like the unnecessary metaclass, you can accomplish the same thing with an ordinary base class like this one.
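(A minimal sketch of such a base class; this is illustrative, not the code behind the link:)
from abc import abstractmethod

class ConcreteOnly:
    # Refuse to instantiate any class that still has abstract methods,
    # without requiring the ABCMeta metaclass.
    def __new__(cls, *args, **kwargs):
        abstract = sorted(
            name
            for name in dir(cls)
            if getattr(getattr(cls, name, None), "__isabstractmethod__", False)
        )
        if abstract:
            raise TypeError(f"{cls.__name__} is abstract: {', '.join(abstract)}")
        return super().__new__(cls)

class Base(ConcreteOnly):
    @abstractmethod
    def hello(self): ...

Base()  # TypeError at construction time, no metaclass needed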
What I'm looking for is a way to prevent subclasses such as Derived from being passed to other code unless they are fully implemented, which type-abstract accomplishes.
I understand, but I don't think it's a good error, because the type checker can't know whether type[X] is abstract or not. A variable of type type[X] could be a derived type, which is concrete. And even if some methods are abstract, there's no way to know whether the factory is abstract or not unless you had some way to specify that, which there isn't.
Unfortunately this doesn't really work either, since I also want to require that the subclasses implement various other abstract methods that I need for my code to work correctly.
Then you should pass around constructed objects rather than types. Constructed objects shouldn't have abstract methods.
Essentially, the dilemma I'm faced with is that the current type-abstract error is very beneficial to my code in many places, but in other places it has led to a massive reduction in type safety, where I've needed to silence errors due to it, which could lead to other, bigger issues being missed (I actually had this happen a few weeks ago, and it was quite painful to debug).
Exactly, so we agree that there's either a missing typing concept or, as I suggest, a better design to be had.
As far as I can imagine, the only way to support both in a reasonable way is to have two different Type types to represent it in the typing module.
I no longer think this is the right solution. I think you should inherit from ABC to prevent construction of abstract objects, and then pass objects around where possible. Instead of passing around type objects, pass around class factories. The class factories always succeed in creating objects.
In this case, calling create to construct Derived will succeed, but you'll get an AbstractMethodError much later in the program's runtime, as hello isn't implemented.
You can have the error happen in create by simply having Base inherit from ABC. It will ensure that abstract classes aren't instantiated. If you don't like the unnecessary metaclass, you can accomplish the same thing with an ordinary base class like this one.
I avoided using abc to achieve this, as it's unavailable in the environment where my code needs to run (even getting the typing module was a massive pain). I'll consider using ipromise, but I'd much rather have statically guaranteed type safety over a runtime error, especially since the classes I'm using can't be instantiated outside of an environment with particular requirements met (i.e. certain devices I don't own being connected over USB, or particular third-party software I also don't own being loaded in the host environment). I want to have as much statically-guaranteed safety as possible, as otherwise the issues won't be caught until my users test it. I know it's far from an ideal setup, but creating a mock environment to test things properly would require months of development time which I don't have.
What I'm looking for is a way to prevent subclasses such as Derived from being passed to other code unless they are fully implemented, which type-abstract accomplishes.
I understand, but I don't think it's a good error, because the type checker can't know whether type[X] is abstract or not. A variable of type type[X] could be a derived type, which is concrete. And even if some methods are abstract, there's no way to know whether the factory is abstract or not unless you had some way to specify that, which there isn't.
Why can't the type checker know whether type[X] is abstract or not? The type-abstract error already shows that this is possible. To clarify, what I'm proposing is that we have a type[X] and a ConcreteType[X], where type[X] is assumed to be abstract, so errors are given upon trying to call functions listed as abstract in the base class X. ConcreteType[X], by contrast, is required to be concrete, so it is safe to call functions such as @classmethod create, and errors are given if you attempt to pass a class where not all methods are implemented. I'd be happy with any variation on this, as long as there's a way to express when you do and don't require types to be concrete.
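Sketched as hypothetical signatures (ConcreteType does not exist in typing today, so these are comments, not working code):
# Hypothetical semantics under this proposal:
# def lookup(iface: type[Base]) -> list[type[Base]]: ...
#     May receive abstract classes; calling iface() or an abstract
#     classmethod on it would be an error.
# def register(plugin: ConcreteType[Base]) -> None: ...
#     Must receive a fully implemented class; plugin.create() is safe.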
Unfortunately this doesn't really work either, since I also want to require that the subclasses implement various other abstract methods that I need for my code to work correctly.
Then you should pass around constructed objects rather than types. Constructed objects shouldn't have abstract methods.
When these objects are constructed, they perform many operations which are costly, and can have bad side effects. For example, a few classes send messages to connected hardware to set it into the correct mode to allow for future operation. If this is done for the wrong hardware, it will lead to many issues, due to the messages being incompatible with the device that's actually connected.
I could move all this initialisation logic into another initialize function for when the device object actually needs to be activated, but this would be somewhat of an antipattern, since the whole point of __init__ is to initialise the object. It would also make getting type safety on the class more challenging, as I'd need to initialise tons of attributes to None inside the constructor, only to actually initialise them elsewhere. Then in other methods, I'd need to deal with the fact that Mypy thinks all my attributes could be None. Essentially, unless I wish to make all my other code much more difficult to maintain, this isn't a solution for my particular case.
As far as I can imagine, the only way to support both in a reasonable way is to have two different Type types to represent it in the typing module.
I no longer think this is the right solution. I think you should inherit from ABC to prevent construction of abstract objects, and then pass objects around where possible. Instead of passing around type objects, pass around class factories. The class factories always succeed in creating objects.
How can I know that the objects constructed by the factories are always fully implemented? There's no way I can get type safety to prove that they can safely be constructed without constructing them unless the above is introduced.
I understand that my particular use case is very niche, but even when working in an environment where things can be tested properly, having static guarantees from tools such as mypy is a lot more meaningful than having a few tests pass.
I am really starting to wonder why you're not programming in a statically typed language then. It sounds like Python is not the best fit for your use case. But anyway.
I hadn't considered Callable for constructible types, as Neil suggested, but that sounds like the right idea. You do need that intersection operator then, though, or you can't put any other constraints on it, like the "any concrete class derived from T" that Miguel wants.
I am really starting to wonder why you're not programming in a statically typed language then. It sounds like Python is not the best fit for your use case. But anyway.
Python is the only option if I don't plan on voiding several EULAs by decompiling other people's software.
as it's unavailable in the environment where my code needs to run (even getting the typing module was a massive pain). I'll consider using ipromise,
Feel free to copy the five or ten lines of code needed to do the runtime check.
but I'd much rather have statically guaranteed type safety over a runtime error
Yes, that's fair to want that.
Why can't the type checker know whether type[X] is abstract or not? The type-abstract error already shows that this is possible.
Please let me start again.
When you pass type[X], MyPy assumes that it knows the signature of X.__init__, which I think was a mistake. Unlike all of the other methods, you can't know the signature of X.__init__, since __init__ doesn't obey LSP. So when calculating the interface of type[X], __init__ should not be included.
Also, MyPy is interpreting a type[X] parameter as implying IsConcrete & type[X]. The reason it wants to make sure that it's concrete is that it wants to prevent you from constructing abstract classes. That's a fair goal, but it's not really possible. As we already said, you don't know the signature of __init__, so you should always block that. And as for factory methods, there's nothing wrong with calling such a method on an abstract class. I know you may not like that, but it's perfectly fine:
from abc import abstractmethod
from typing import Self  # typing_extensions on Python < 3.11

class Base:
    @abstractmethod
    def f(self): ...

    @classmethod
    def factory(cls) -> Self:
        return SomeConcreteDerivedClass()  # absolutely nothing wrong with calling this

class SomeConcreteDerivedClass(Base):
    def f(self): ...
So this whole behaviour that you like (the blocking of abstract types being passed to functions accepting types) is ill-conceived whether or not it happens to be convenient for you.
I suspect that the real solution is for you to simply pass constructed objects or factories, both of which are already guaranteed to be concrete (since they're constructed).
as long as there's a way to express when you do and don't require types to be concrete.
There is: you construct the object!
When these objects are constructed, they perform many operations which are costly, and can have bad side effects. For example, a few classes send messages to connected hardware to set it into the correct mode to allow for future operation.
Then construct a factory object instead. The factory can construct the object. And the factory can check that the object is constructable when it's created if you like.
Then in other methods, I'd need to deal with the fact that Mypy thinks all my attributes could be None.
I agree that that would be a mess, so don't do this. Construct a factory, and the factory constructs the object. If there is a lot of common data, put the data into a dataclass, so that you don't have too much repeated code.
How can I know that the objects constructed by the factories are always fully implemented? There's no way I can get type safety to prove that they can safely be constructed without constructing them unless the above is introduced.
Because you don't code factories for abstract classes. Just code them for the concrete ones. For every concrete class, build a factory class. Something like:
from abc import abstractmethod

class AbstractBase: ...

class ConcreteDerived(AbstractBase): ...

class FactoryBase:
    @abstractmethod
    def create(self) -> AbstractBase: ...

class ConcreteDerivedFactory(FactoryBase):
    def create(self) -> ConcreteDerived: ...

def g(factory: FactoryBase): ...
Unlike g(t: type[AbstractBase]) (which might be passed an abstract class), the above g(factory: FactoryBase) is guaranteed to receive a concrete object, since all objects are concrete.
If that's too much repeated code, write a function that produces the factory programmatically (but I would avoid this unless it's a really big deal).
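For example, something like this (building on the classes above; make_factory is an invented name):
from typing import Callable

def make_factory(cls: Callable[[], AbstractBase]) -> FactoryBase:
    # `cls` must be constructable with no arguments to match
    # Callable[[], AbstractBase], so abstract classes are rejected statically.
    class _Factory(FactoryBase):
        def create(self) -> AbstractBase:
            return cls()
    return _Factory()

derived_factory = make_factory(ConcreteDerived)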
The example of the factory returning a subclass very much throws a spanner in the works for the idea. I think I'll just have to deal with the fact that users might be the ones catching the bugs for this one. Setting up a factory sounds like it'll just add a bunch of complexity to a project that's difficult enough to grasp as it is.
Just to add another use case to the discussion, which I believe hasn't been covered yet (there were lots of in-depth comments, and I am a bit tired, so if I missed it, my apologies).
I have a bunch of classes that are meant to interpret other existing types in some way or another. These classes shouldn't be instantiated, as I do not need them to hold any variable data. Furthermore, some of these classes might point to another of themselves, to delegate behaviour under certain conditions.
You can find a (slightly expanded) MWE below:
from abc import ABC, abstractmethod

class BaseA(ABC):
    @abstractmethod
    def __init__(self) -> None:
        ...

    @classmethod
    @abstractmethod
    def process(cls, val) -> None:
        ...

class A1(BaseA, ABC):
    @classmethod
    def process(cls, val) -> None:
        ...  # do something with val.

class BaseB(ABC):
    a_type: type[BaseA]

    @abstractmethod
    def __init__(self) -> None:
        ...

    @classmethod
    @abstractmethod
    def process(cls, val) -> None:
        ...

class B1(BaseB, ABC):
    a_type: type[BaseA] = A1

    @classmethod
    def process(cls, val) -> None:
        ...  # do something with val; depending on the value, delegate to `a_type` for processing.
The assignment of A1 to a_type of B1 also produces an error about concrete classes:
concrete_class.py: note: In class "B1":
concrete_class.py:22: error: Can only assign concrete classes to a variable of type "Type[BaseA]" [misc]
a_type: type[BaseA] = A1
^~
Found 1 error in 1 file (checked 1 source file)
A meta question: can someone please summarize what exactly is requested/proposed in this issue that is not covered by --disable-error-code=type-abstract (which has already been available for quite some time)?
Oh well, I just discovered that there is a bug causing the [misc] error code to be assigned to some of the errors that should have [type-abstract]; will submit a fix shortly.
In https://github.com/python/mypy/issues/2169 and https://github.com/python/mypy/issues/1843 there was discussion about using Type[some_abc] and how it should be allowed in a function signature, but that the call site was expected to pass a concrete subclass of some_abc. There is an implicit assumption that the objective of a function taking such a type is, ultimately, to instantiate the type. @gvanrossum said, in https://github.com/python/mypy/issues/2169#issuecomment-249001710:
I have such a use case.
I have a sequence of observers supplied by clients of my library, to which I want to dispatch events according to the abstract base class(es) that each implements. I tried to do this as follows:
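(The original snippet is not preserved in this extraction; the following is an illustrative reconstruction of the described pattern, with invented names such as MessageObserver and observers_implementing:)
from abc import ABC, abstractmethod
from typing import Sequence, Type, TypeVar

T = TypeVar("T")

class MessageObserver(ABC):
    @abstractmethod
    def on_message(self, text: str) -> None: ...

def observers_implementing(observers: Sequence[object], iface: Type[T]) -> Sequence[T]:
    # Nothing is ever instantiated; the type is only used for isinstance.
    return [obs for obs in observers if isinstance(obs, iface)]

for obs in observers_implementing([], MessageObserver):
    obs.on_message("hello")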
Unfortunately, MyPy complains about this with the "Only concrete class can be given" error at the call site. Given that (AFAICT) the decision was made to not attempt to verify that the runtime type supports any specific constructor signature, I'm wondering why there is nonetheless an expectation that the runtime type is constructable at all. In my case, the entire purpose of typing here is isinstance checks.