gvanrossum opened this issue 4 years ago
See also #13.
I think it's valuable to allow instances to customize their __match__ behavior too. Here's a slightly different model, where instances have their __match__ method invoked, and all calls in pattern expressions (like your example above) are rewritten to use a deferred call to __deepmatch__.
> What should be the signature of __match__()?
I think it's important to keep the basic functionality simple and easy to override.
def __match__(self, target: Any, /) -> Optional[Mapping[str, Any]]: ...
None if no match, and a (possibly empty) mapping of local names to bind if successful.
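For illustration, a minimal user-level implementation of that signature might look like this (the Email class and the bound name "domain" are invented for this example):

from typing import Any, Mapping, Optional

class Email:
    def __match__(self, target: Any, /) -> Optional[Mapping[str, Any]]:
        # Match any string containing "@" and bind its domain part.
        if isinstance(target, str) and "@" in target:
            return {"domain": target.rpartition("@")[2]}
        return None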
Here's where the advanced functionality gets a bit tricky. I think the least complicated solution for handling nested matches is to turn all calls into a DeferredPattern(value, args, kwargs), whose __match__ method calls value.__deepmatch__(target, *args, **kwargs)... or something. It's a clean interface that solves the problem of having to invert the call stack.
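A minimal sketch of what that deferred wrapper could look like, assuming the __match__/__deepmatch__ signatures discussed here (DeferredPattern is not an existing class):

class DeferredPattern:
    """Wraps a call appearing in a pattern, e.g. Foo(a, b=0)."""

    def __init__(self, value, args, kwargs):
        self.value = value
        self.args = args
        self.kwargs = kwargs

    def __match__(self, target, /):
        # Defer to the wrapped object's __deepmatch__ with the call-site arguments.
        return self.value.__deepmatch__(target, *self.args, **self.kwargs)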
> And should there be a default type.__match__()?
Yes, but I think they should differ slightly for object and type:
class object:
    ...
    def __match__(self, target: Any, /) -> Optional[Mapping[str, Any]]:
        if self == target:
            return {}
        return None
    __deepmatch__ = None

class type:
    ...
    def __match__(cls, target: Any, /) -> Optional[Mapping[str, Any]]:
        if isinstance(target, cls):
            return {}
        return None

    def __deepmatch__(cls, target: Any, /, *args: Any, **kwargs: Any) -> Optional[Mapping[str, Any]]:
        names = cls.__match__(target)
        if names is None:
            return None
        match args:
            case ():
                pass
            case (arg?,):
                m = arg.__match__(target)
                if m is None:
                    return None
                names |= m
            else:
                # Others can choose to handle this differently.
                raise TypeError(f"__deepmatch__ expected at most 1 positional argument at call site (got {len(args)})")
        for key, value in kwargs.items():
            try:
                attr = getattr(target, key)
            except AttributeError:
                return None
            m = value.__match__(attr)
            if m is None:
                return None
            names |= m
        return names
> @brandtbucher proposes a library of helpers in the stdlib that can be used for various cases...

I'll also add that they could be very useful if inherited as mix-ins for custom __match__/__deepmatch__ behavior.
I have been thinking about an API like the __unapply__ method in Tobias Kohn's proposal. (Though I'd name it __match__.)
Maybe that's what you're proposing too? I'm not entirely sure how __match__ and __deepmatch__ are called in your proposal. E.g. if we have this case (using my syntax for now):

match target:
    case FooBar(Foo(a), Bar(b, 0)): ...

what is being called?
In my proposal it would call FooBar.__match__(target), which should return two values (in Tobias' proposal, as a tuple). Let's say we have captured these values as x and y. Then we call Foo.__match__(x), expecting a tuple of one value, which is assigned to a. Finally we call Bar.__match__(y) and expect it to return a tuple of two values. The first is assigned to b, the second must be equal to 0.
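To make that concrete, here is a rough sketch of how that recursion could go under this tuple-returning protocol (the match_pattern helper and the way patterns are represented here are invented purely for illustration):

def match_pattern(cls, subpatterns, target):
    # cls.__match__ returns None or a tuple with one value per sub-pattern.
    values = cls.__match__(target)
    if values is None or len(values) != len(subpatterns):
        return None
    bindings = {}
    for sub, value in zip(subpatterns, values):
        if isinstance(sub, str):
            # A capture variable like `a` or `b`.
            bindings[sub] = value
        elif isinstance(sub, tuple):
            # A nested pattern like Foo(a), represented here as (Foo, ("a",)).
            inner_cls, inner_subs = sub
            inner = match_pattern(inner_cls, inner_subs, value)
            if inner is None:
                return None
            bindings.update(inner)
        elif sub != value:
            # A literal like 0 that must compare equal.
            return None
    return bindings

# FooBar(Foo(a), Bar(b, 0)) roughly corresponds to:
# match_pattern(FooBar, ((Foo, ("a",)), (Bar, ("b", 0))), target)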
Tobias has lots of special cases for popular built-in objects, annotated classes, dataclasses, and others, to make things smooth without having to add __match__ methods everywhere, but that's just engineering work.
I see. But what about keyword arguments? I imagine that in many cases, all three of these would be equivalent:
FooBar(Foo(a), Bar(b, 0))
FooBar(Foo(a), bar=Bar(b, 0))
FooBar(foo=Foo(a), bar=Bar(b, 0))
That's originally why I came up with __deepmatch__, to allow classes to handle positional and keyword arguments themselves. I think I like your model better though, where the interpreter does the work... it's much easier to explain, and for users to implement.
To solve the keyword issue, maybe __match__ should return both an iterable and a mapping on success. For example:
>>> FooBar.__match__(target)
((<Foo at 0xabc>, <Bar at 0xdef>), {'foo': <Foo at 0xabc>, 'bar': <Bar at 0xdef>})
Perhaps in some cases there won't be overlap between the two, but here it allows us to easily match all of the patterns above.
So maybe something similar to the following default is fine?
class type:
    ...
    def __match__(cls, target: Any, /) -> Optional[Tuple[Iterable[Any], Mapping[str, Any]]]:
        if not isinstance(target, cls):
            return None
        if hasattr(target, "__slots__"):
            d = {attr: getattr(target, attr, None) for attr in target.__slots__}
        else:
            d = vars(target).copy()
        return d.values(), d
(This all sort of makes me think of __getnewargs__ and __getnewargs_ex__, which try to solve the same deconstruction problem. I don't know if there's a clean way to reuse those, or fall back on them, though.)
IIUC in Tobias' model, Foo(k1=p1, k2=p2) ignores the values returned by Foo.__match__(target) and just extracts attribute values k1 and k2 from the target using getattr(), and then matches those against patterns p1 and p2. (But read his description and code for unparse() to be sure.)
We could also make __match__ return a dict, and index that dict (thus allowing Foo to censor or synthesize attributes). Since dicts are ordered we could just use the values (ignoring the keys) for the positional form. We could even allow combining them. We could also allow omitting positional args, so e.g. FooBar(Foo(a)) would be the same as FooBar(Foo(a), _).
PS. I think you meant d.values().
> IIUC in Tobias' model, Foo(k1=p1, k2=p2) ignores the values returned by Foo.__match__(target) and just extracts attribute values k1 and k2 from the target using getattr(), and then matches those against patterns p1 and p2.
Hardcoding attribute checks feels a bit restrictive to me...to loosely summarize many examples that come to mind:
class C:
    def __init__(self, foo, /, bar, *args, unused=None, **kwargs):
        self.foo = foo
        self._secret_bar = bar
        self._other_args = args
        self._other_kwargs = kwargs

    @classmethod
    def __match__(cls, target):
        if not isinstance(target, cls):
            return None
        args = (target.foo, target._secret_bar, *target._other_args)
        kwargs = {
            "bar": target._secret_bar,
            "unused": AnythingPattern(),
            **target._other_kwargs,
        }
        return args, kwargs
Note that this has a slight oddity: bar could be matched both positionally and by keyword in the same "call":
case C(None, 42, bar=42): ...
> We could also make __match__ return a dict, and index that dict (thus allowing Foo to censor or synthesize attributes). Since dicts are ordered we could just use the values (ignoring the keys) for the positional form. We could even allow combining them.
Hm, maybe (it crossed my mind briefly when composing my last message). It definitely simplifies the work of writing a __match__ method, but it seems like relying on dict order could lead to hard-to-diagnose bugs... is there any other place in the runtime where dict order changes control flow / program logic? In fact, I'd probably amend the default implementations to return an empty tuple and force users to define __match__ if they want positional args.
Plus, target.foo and target._other_args would need to be handled differently, above. Especially in the case of *args, I don't know if there's a good way to do it without creating a bunch of dummy key names (_arg0, _arg1, ...). Maybe we're okay with that, though? I don't know.
> PS. I think you meant d.values().
Thanks, fixed.
Dict order preservation is part of the language spec since 3.7. It is used e.g. to preserve keyword order in a call when the callee uses **kwds.
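For example (Python 3.7+, function name arbitrary):

def f(**kwds):
    return list(kwds)

print(f(b=1, a=2))  # ['b', 'a'] -- keyword order from the call site is preserved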
IIUC Tobias doesn't have a good answer for *args. And I'm not sure that we need one. @Tobias-Kohn Any insights here? I believe that Brandt is proposing that we should be able to write patterns like FooBar(a, b, c, d) and get a match when the target value was created using e.g. FooBar(a, b, *[c, d]).
I've thought about this a bit more and written some examples down... I'm really coming around to your idea of just returning a mapping of names to objects, and using that order for positional arguments. I think additional complexity isn't really justified except in pathological cases like the ones I presented. It also solves the weird duplicate argument problem that my solution had, which I see as a bigger flaw than the *args stuff.
Besides, if it really turns out to be a big enough issue, we could always add a new hook later. But I don't think it will.
So, anyway, with my model, you could write a helper class Len that works as expected:
from collections.abc import Collection

class Len:
    @staticmethod
    def __match__(target):
        if not isinstance(target, Collection):
            return None
        return {"_len": len(target)}
Now we can write cases like these (not all of them in the same match statement :-):
case Len(_): print("Something that has a length")
case Len(0): print("Empty")
case Len(0|1): print("Empty or singleton")
case Len(x): print("Length is", x)
I think we can also write your Bool and TypeIs:
class Bool:
    @staticmethod
    def __match__(target):
        return {"_bool": bool(target)}

class TypeIs:
    @staticmethod
    def __match__(target):
        return {"_type": type(target)}
I'm afraid it doesn't bode well for Is(x) or Subclass(t). I'm not sure I mind though, given that we can write those as guards easily.
What are you talking about? It's simple!
class _IsHack:
    def __init__(self, o):
        self.o = o

    def __eq__(self, other):
        return self.o is other

class Is:
    @staticmethod
    def __match__(target):
        return {"_": _IsHack(target)}
But yeah, probably better left as a guard. :wink:
First off, I really like the idea of returning a dict (i.e. Optional[Mapping[str, Any]]) as well.
Let me try to give a bit more background on my initial design/ideas. I am working on a daily basis with pattern matching in Scala, and my initial proposal is therefore heavily influenced by that (that's why I called the match-method __unapply__ as in Scala, but I think __match__ is much nicer ^_^). The signature in Scala is basically Optional[Tuple[Any]], where the returned tuple is then matched against the arguments in the pattern. That's what I used as a basis for my first draft.
However, this entire idea with returning tuples obviously comes from functional languages, where the primary structural datatype is a tuple (in Algol/C-languages we would much rather use a struct or record instead, and name the individual fields). In such a setting it makes sense to consider a pattern to be the inverse of a constructor, e.g. in Python syntax:
class Person:
    def __init__(self, name, age, phone):
        self.name = name
        ...

    def __un_init__(instance):
        return (instance.name, instance.age, instance.phone)
In Python, however, there are several issues with this approach. First off, the fields in data types are not ordered. There is no intrinsic necessity why name should be before age in an object/instance (this is clearly different for tuples). Secondly, the number and types of an object's attributes can differ significantly from the arguments passed into the constructor. The constructor can create new fields out of the arguments and further context, and the object might have properties, etc. (all of which, I would argue, is very common in Python). Even worse, we can add or remove fields dynamically, which raises the question: if I add or remove a field of an object, is it then (as far as pattern matching is concerned) still of the type indicated by its class?
So, with a more realistic and pragmatic point of view, using positional arguments or tuples for pattern matching in Python just does not make much sense in general. Your approach with keywords is far superior and better suited.
Unfortunately, as we and all programmers are rather lazy, we probably do not want to write down all the keywords every time, and patterns based on position seem so much more convenient. In that regard, I see your idea of relying on the intrinsic order in dictionaries as a very nice compromise.
In short: I feel the direction this is taking here is a significant improvement over what I did, and I like it a lot :-).
By the way: the special cases in my proposal are mainly because I could not just alter the internal structure of built-in classes and Python's interpreter. But I would certainly prefer a cleaner and more consistent interface :-).
And yes, my initial proposal was based on the idea that case Foo(k1=p1, k2=p2): would first check if a given value "is of type Foo" (i.e. matches Foo's match-method) and then check if the value has the specific attributes. I think this shows once again how my original idea developed as an extension of Scala. Nonetheless, I find the idea of __match__ returning a mapping of attribute names to values much cleaner and better.
There's one downside with __match__ returning a dict that's consulted by patterns like Foo(key=val).
Suppose class Foo has an expensive property, e.g. display_name, which combines various other attributes. If we want to be able to write the pattern Foo(display_name="Throatwarbler Mangrove"), that means (in my interpretation, anyway) that Foo.__match__() must include display_name in the dict it returns, which means that it must invoke that expensive property -- even if it isn't needed. (Because in this design, Foo.__match__() does not receive any input about the pattern -- it just receives the target value.)
I'm not sure how to address this, though we could do something like "if the attribute name isn't in the dict, try getting the attribute directly from the target before giving up."
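A sketch of that fallback lookup (the helper name and the _MISSING sentinel are invented for illustration):

_MISSING = object()

def keyword_value(tmp, target, name):
    # Value to match the sub-pattern in `name=...` against; _MISSING means the match fails.
    if name in tmp:
        return tmp[name]                      # provided by __match__
    return getattr(target, name, _MISSING)    # otherwise try the target directly before giving up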
Anyway, I'd like to pin down the semantics a bit more. Here's my proposal (and what I've implemented): when the pattern is of the form Class(arg0, arg1, arg2, ..., kw0=val0, kw1=val1, kw2=val2, ...) (where arg0... and val0... are patterns) we first call Class.__match__(target), put the result into tmp, and then do the following:
1. If tmp is None, the whole pattern fails; else:
2. we match each arg[i] against list(tmp.values())[i], failing if we run out of values, and if these all pass:
3. we match each val[i] against tmp[kw[i]], failing if tmp doesn't have that key.
(UPDATE: I had to add a list() call to the case for positional arguments. The real implementation should try to avoid being O(N**2).)
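Spelled out as a sketch (match_one here is an invented stand-in for recursively matching a single sub-pattern against a value and returning its bindings or None):

def match_one(pattern, value):
    # Stand-in for recursive sub-pattern matching: a string is treated as a
    # capture variable, anything else is matched by equality.
    if isinstance(pattern, str):
        return {pattern: value}
    return {} if pattern == value else None

def match_class_pattern(cls, args, kwargs, target):
    # args and kwargs hold the sub-patterns from Class(arg0, ..., kw0=val0, ...).
    tmp = cls.__match__(target)
    if tmp is None:                        # step 1: no match at all
        return None
    bound = {}
    values = list(tmp.values())            # step 2: positional sub-patterns, in dict order
    if len(args) > len(values):
        return None                        # ran out of values
    for pattern, value in zip(args, values):
        m = match_one(pattern, value)
        if m is None:
            return None
        bound.update(m)
    for key, pattern in kwargs.items():    # step 3: keyword sub-patterns
        if key not in tmp:
            return None                    # tmp doesn't have that key
        m = match_one(pattern, tmp[key])
        if m is None:
            return None
        bound.update(m)
    return bound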
Would the third stage match against all returned key-value pairs in tmp, or just the remaining ones not already matched in stage 2?
My vote is for remaining, since it prevents users from matching the same member twice (once by position and once by keyword), which is probably a bug we should bring to their attention at worst, and an obfuscated guard at best.
Agreed that it would be weird if the pattern was Point(a, b, y=0) and Point.__match__(target) returned {"x": 1, "y": 0} and this was allowed. OTOH I don't know an efficient way to check for duplicates like this; removing matched positional args from the dict seems to be a lot of overhead to catch a weird but perhaps harmless corner case. Do we know of other languages that even support this kind of hybrid call?
Another fine point of my proposed spec is that, if e.g. Point() is a 3-dimensional point with constructor Point(x, y, z), then valid patterns include Point(), Point(x), Point(x, y) and Point(x, y, z). IIRC Tobias' library would only support Point() and Point(x, y, z) -- the others would require wildcards, e.g. Point(x, _, _) or Point(x, y, _). (Not counting keyword args.) I think this makes sense if we expect __match__() methods to return dicts with additional key/value pairs (e.g. synthetic attributes).
Yes, returning a map with all possibilities is certainly too costly. Scala does it this way, but there the returned tuples usually contain only a few values, whereas the dicts here could become quite large. In essence, this idea boils down to something that is quite like:
def __match__(cls, instance):
    if want_to_match(instance):
        return instance.__dict__
    else:
        return None
So, if we want to be complete, we would also have to include dunder methods, say, in that dict, probably populating it with a lot of things that will never get actually used.
Perhaps it would be possible to pass the minimal set of required keys to the __match__ method? Something like:

def __match__(cls, pos_arg_count: int, keywords: set[str]) -> Optional[dict[str, Any]]:
    ...
It would then also be the responsibility of the __match__ method to complain if an attribute occurs twice --- once as positional and once as keyword argument.
With such a signature, __match__ could also decide to fail if there are not enough positional arguments, say. Hence, if Point(x) and Point(x, y) do not make sense, __match__ just returns None in that case.
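A sketch of what such a __match__ could look like, assuming the instance being matched is also passed in and using an invented __match_names__ attribute to supply the positional order:

class Point:
    __match_names__ = ("x", "y")   # invented: which attributes positional sub-patterns map to

    def __init__(self, x, y):
        self.x = x
        self.y = y

    @classmethod
    def __match__(cls, target, pos_arg_count, keywords):
        if not isinstance(target, cls):
            return None
        if pos_arg_count > len(cls.__match_names__):
            return None                                 # too many positional sub-patterns
        wanted = set(cls.__match_names__[:pos_arg_count]) | set(keywords)
        result = {}
        for name in wanted:
            try:
                result[name] = getattr(target, name)    # only compute what the pattern asks for
            except AttributeError:
                return None
        return result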
Just as an aside: Jython's Swing libraries use constructors where additional keyword arguments to the constructor are just set as attributes, i.e. you could say f = JFrame("Hello World"); f.size = (30, 20), or directly f = JFrame("Hello World", size=(30, 20)). I think we want analogous semantics here for pattern matching, with possibly a few (required or optional) positional arguments and any number of keyword arguments for attribute checking, right?
Concerning efficiency, is this issue of mapping positional and keyword arguments to a dict not very similar to argument passing when calling a function? If we could utilise the same techniques, a pattern match would essentially be as efficient as a function call---which is what I would intuitively expect. But perhaps I am missing something here?
Another thing that I had considered for my draft was the question whether we want to check for the presence of attributes, but not necessarily their value, i.e. some form of hasattr(.). It seems to me that this is very common in duck typing. Particularly JavaScript seems to be full of structures like if (window.A) {} else { error("You lack support for A"); } (I think in Python it is often more idiomatic to do a try/except to check for names).
As long as attributes are cheap and simply fields, this is no issue. But I wonder whether there could be a way that we do not have to compute the value of expensive attributes if never needed, but just confirm their presence...? However, this is probably more of a hassle than what we could potentially gain, and there is still the possibility of using guards for that.
An alternative to returning a dictionary is to return some kind of "proxy object". This would be a simplified version of the object with a canonicalized set of attributes. The attribute names of the proxy would be the same as the matchable variables.
For most classes, the proxy returned would simply be 'self' - i.e. the __match__ function simply returns the object itself. The interpreter can then complete the match by direct inspection of the object's attributes. This means that for most objects, __match__ is very cheap.
However, for objects with a complex internal structure, the proxy could be a completely different class with a different implementation and different attributes. Thus if I have a Point class that stores [x, y] in an array, I could return a PointProxy that has explicit 'x' and 'y' attributes.
This is particularly useful in cases where you have an expensive-to-compute attribute: the proxy attribute can be a property getter, so it's only evaluated lazily. (Unfortunately dictionaries don't have the capability to do lazy evaluation.)
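A sketch of that, with Point storing its coordinates in a list internally and PointProxy exposing them as lazy properties (both names are illustrative):

class PointProxy:
    __slots__ = ("_coords",)

    def __init__(self, coords):
        self._coords = coords

    @property
    def x(self):
        return self._coords[0]   # evaluated lazily, only if the pattern looks at x

    @property
    def y(self):
        return self._coords[1]

class Point:
    def __init__(self, x, y):
        self._coords = [x, y]    # internal storage differs from the matchable attributes

    @classmethod
    def __match__(cls, target):
        if not isinstance(target, cls):
            return None
        return PointProxy(target._coords)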
Note also that the proxy can use __slots__, which (theoretically) should make the object relatively cheap to construct relative to an ad-hoc dictionary; and in fact the __slots__ could be used for purposes of positional matching as well (although that seems a bit leaky...)
Note that the original object can still veto the match in __match__ by returning None; thus you can still implement your Is and IsType matchers.
This idea of a proxy instead of a dictionary would also address the issue I had just raised with checking for the presence of attributes without having to compute their value, i.e. lazy attributes. But for positional arguments, I would then use an explicit mechanism so as to use the full power of such a proxy. For instance, the proxy could have a field like __pos_match_fields__ = ('x', 'y') that properly maps positional attributes to named ones. Or we allow __getattr__(.) to accept integers as keys ;-).
Lots of good ideas, I'll review them later this week (getting the PEP 617 implementation out in time for 3.9 alpha 6 is taking all my time).
I think @viridia's idea of returning a proxy object from __match__ that in simple cases can just be self is a winner.
There are then several approaches to positional matching.
1. Have __match__ return a tuple (proxy, names), where names is a sequence of names for attributes that can be matched positionally.
2. Make the names sequence an optional attribute of the returned proxy, with a magical name like __pos_match_fields__.
The former deals with the issue that we already have many different conventions for indicating positional arguments: _fields for collections.namedtuple, __annotations__ for data classes and for classes using annotations at the class level in general, __slots__ for slotted classes. Rather than having to add the same information to the class in two different ways, we can just return the desired list from __match__.
For example, collections.namedtuple() could just add a __match__ method with this definition:
@classmethod
def __match__(cls, target):
    if not isinstance(target, cls):
        return None
    return target, cls._fields
Compare this to the second alternative:
@classmethod
def __match__(cls, target):
    if not isinstance(target, cls):
        return None
    return target

__pos_match_fields__ = _fields
The second alternative could also in theory be more memory efficient (no need to construct a tuple in __match__, and the list can be precomputed at class definition time).
I can't quite decide which alternative is the more elegant API -- on the one hand, I like having all the logic inside __match__; on the other hand I like the idea of just returning self in simpler cases, even though now we have two new dunders instead of one. (It's also slightly simpler to generate code for, I suspect.)
Let's go with alternative 2 and see how it goes. I have some code that I'll try to update later.
Note that in the case of an expensive property, hasattr still calls it to compute the value, so that's not ideal, but I'm not sure we need to solve that -- this problem is not new to pattern matching, and it's up to the class author not to design an API that implements feature checks using hasattr on expensive properties.
Question. If the class doesn't have a __match__ method, what should happen? In that case a pattern like SomeClass(...) can never match. Should we just make the pattern match raise an exception? Or should we silently treat this as a "never match" case? That would seem to violate the Zen of Python's "errors should never pass silently".
Similarly, what to do if there's no __pos_match_fields__ and positional patterns are given? I'm less sure that that's an error, because the class could return a proxy with a list of fields computed based on the specific object being matched (i.e. a __match__ call for a different object could return a longer list).
Ditto if a field is mentioned in __pos_match_fields__ but doesn't actually exist on the object -- that could be a case of a trivial __match__ method that always returns self, with a static attribute giving all the possible fields.
A related question: Can a class be made matchable via the addition of a metaclass?
> Can a class be made matchable via the addition of a metaclass?
Sure! Current wisdom says that metaclasses are rarely the right solution (because they don't mix, and affect subclasses), and recommends class decorators instead (e.g. @dataclass). But it does work with metaclasses. Example:
>>> class Meta(type):
...     def __match__(self, *args): print("XX", args); return None
...
>>> class C(metaclass=Meta): x: int; y: int
...
>>> C.__match__
<bound method Meta.__match__ of <class '__main__.C'>>
>>> C.__match__(C())
XX (<__main__.C object at 0x105c28e50>,)
>>>
I guess the real question is, is there a handy idiom for making a class be matchable without actually having to write a dunder method. Sounds like the decorator approach may be the way to go.
This isn't too dissimilar from some other languages that support a native matching facility - for example in Scala, you have to declare a class as a 'case class' to support this; regular classes won't do it.
To answer your question about whether the lack of __match__ should throw an error, I would say that it should throw. The reason is that if indeed it can never match any expression, then having it included in a match statement is clearly a mistake - because you could delete that line of code and have no effect on the semantics of the program.
> is there a handy idiom for making a class be matchable without actually having to write a dunder method
For dataclasses, we should just add this to the @dataclass decorator. It should basically add a __match__ method that accepts instances of the given class and returns self (so no duck typing allowed), and a __pos_match_fields__ class attribute that is just list(__annotations__.keys()). For collections.namedtuple and typing.NamedTuple we should do a similar thing.
For everything else, we could write a class decorator that adds a __match__ method like this:
@classmethod
def __match__(cls, target):
    if isinstance(target, cls):
        return target
    else:
        return None
The same class decorator could also look for __annotations__ and __slots__ (in that order, probably) and construct a __pos_match_fields__ class attribute from those.
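A sketch of such a decorator (the name matchable anticipates the example below; the exact lookup rules are only a guess):

def matchable(cls):
    # Add a default __match__ that uses the instance itself as the proxy.
    def __match__(kls, target):
        return target if isinstance(target, kls) else None
    cls.__match__ = classmethod(__match__)

    # Derive __pos_match_fields__ from __annotations__ or __slots__ if not given explicitly.
    if "__pos_match_fields__" not in cls.__dict__:
        if "__annotations__" in cls.__dict__:
            cls.__pos_match_fields__ = list(cls.__dict__["__annotations__"])
        elif hasattr(cls, "__slots__"):
            cls.__pos_match_fields__ = list(cls.__slots__)
        else:
            cls.__pos_match_fields__ = []
    return cls

The parameterized form @matchable("name", "number_of_legs") used below would then just be a thin wrapper that sets __pos_match_fields__ explicitly before applying the same logic.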
Frameworks that have their own convention for defining fields could automatically add similar infrastructure in their base implementation (whether based on a metaclass, a class decorator, or even a regular base class) -- or they could offer a class decorator that knows about the framework's field definition conventions. (I imagine this would apply to things like Django, ORMs, and things in the SciPy world.)
I am all in favour of having a class decorator that does the work for most classes. It could perhaps even take positional match attributes as arguments to override looking in __annotations__ and __slots__, e.g.:
@matchable("name", "number_of_legs")
class Animal:
    def __init__(self, name, legs):
        self.name = name
        self.number_of_legs = legs
would then create a field __pos_match_fields__ = ('name', 'number_of_legs').
I am a bit unsure where the __pos_match_fields__ should actually live. On first thought I would have put it into the proxy returned by __match__ to allow for complete customisability of the matching. However, you suggest making it a class attribute, which would live in the same class as the corresponding __match__ method, and I feel that this might lead to a cleaner interface.
On the other hand, I would like to point out that the mapping of positions to keywords might really depend on the actual object provided, and not just on the class doing the match. To illustrate, let me briefly come back to Guido's Len() example above. Now, I find that in Java/Scala, collections use wildly different names to denote the number of entries stored: one time it is length, another time it is size, and sometimes it might even be something like count. If I wanted to cover all of them with something like case Len(2):, I would have a hard time as Len stipulated that the positional argument (2 in our case) must map to __len__ and nothing else.
Yes, in Python we do not have this particular issue with length vs. size. But then again, there might be other more realistic use cases where the mapping of position to keyword might slightly vary.
If the __match__ is missing, I, too, would consider that an error that should not just be ignored silently. Whereas a missing specific attribute is something we should deal with more gracefully (this is almost a philosophical question: is a cat with one leg missing still a cat, although we would naturally define a cat as an animal with four legs?).
Perhaps a missing attribute just would not match anything -- neither a positional argument referring to it via __pos_match_fields__, nor an explicit mention in the pattern itself. Anyway, the reason why I think we should be lenient here is that an object might just as well have extra attributes, which would go undetected (I think it would be hugely impractical to check that there are no additional or superfluous attributes present). After all, the notion of a type is in Python often not as static as in other languages, as objects might gain or lose attributes all the time.
Come to think of it, there is also a second aspect to this. Let us assume we have a pattern like case Foo(bar=12):, say, and want to match some object spam against this pattern. If Foo itself does not support matching because of a missing __match__ method, then this is clearly a programming error: the syntax itself says that Foo must be matchable here. On the other hand, if spam is missing an attribute bar and therefore cannot match the pattern, it is just an object that does not match the pattern and that's that -- something we expect to happen all the time, anyway (I mean, that's what pattern matching is for, right?).
I think we're converging on agreement. I actually want to make __pos_match_fields__ an attribute of the proxy as well (it's just that, in the common/trivial case, a class attribute becomes an attribute of the proxy anyways).
And yes, if for case Foo(bar=12) __match__ were to return an object that doesn't actually have a bar attribute but does list bar in its __pos_match_fields__ attribute, that object just doesn't match.
Just one more thing here then. I absolutely hate having to type __pos_match_fields__. Can we find a better color for this bike shed?
I think it's a good idea for the name to start with __match since it will sort together with __match__. What about __match_args__?
I like that. It also goes with the naming convention of *args.
I also like __match_args__. It's short and descriptive.
What about renaming __match__ to __match_kwargs__ to be consistent with the naming convention?
But that's not what it does -- __match__ is called both for Pt(x, y, z) and for Pt(x=x, y=y, z=z). Besides, one is a method and one is an attribute -- these are different, so should get differently formed names.
I think we should require that X in X(...) is a type, and raise if not. I just discovered that (since the default object.__match__ is a classmethod) weird patterns like True(False) and 42() are surprisingly legal (functionally the same as bool(False) and int(), respectively)!
Oh, good catch. We can just insist that it inherits from type (there may be examples of type-like things that don't inherit from type, but in my experience that's a vanishingly small set -- it's mostly done to show off a party trick).
What would we still need to add for this? The PEP has a full section on the __match__ protocol, and it also mentions that the class of a class pattern must inherit from type.
When we write a class pattern like Foo(...), this should probably invoke a protocol on Foo, for example Foo.__match__(...). What should be the signature of __match__()? And should there be a default type.__match__()?
@brandtbucher proposes a library of helpers in the stdlib that can be used for various cases, e.g.:
- Bool(True) or Bool(False) for truthiness checking
- Len(num)
- Is(something)
- TypeIs(some_type)
- Subclass(some_type)
Alternatively you could write all of those as guards.