adriangb opened 2 years ago
I dug into this a bit; there are several tricky aspects.
One complicating factor is that users might want to declare their security models in different spots, for re-usability. For example, imagine a `get_user(...) -> User` dependency that requires a "me" scope and a `get_user_items() -> List[Item]` that requires the "read" scope. If those are completely separate dependencies, how are these models joined? Are they ANDed or ORed? From a dependency injection standpoint, if they each raise an HTTPException, making this an AND (from an OpenAPI perspective) makes sense. So how would users declare an OR? Maybe some sort of composite security object that dispatches to its children and handles the exceptions? But OpenAPI lets us OR ANDed schemes, not the other way around (so you can't say `key1 AND (key2 OR key3)`).
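For reference, OpenAPI's `security` keyword encodes exactly this "OR of ANDs": each entry in the list is one alternative, and all schemes inside a single entry must be satisfied together. A document requiring `key1 OR (key2 AND key3)` would look like:

```yaml
security:
  - key1: []          # alternative 1: key1 alone
  - key2: []          # alternative 2: key2 AND key3 together
    key3: []
```

The inverse, `key1 AND (key2 OR key3)`, has no direct encoding; you would have to expand it into the equivalent `(key1 AND key2) OR (key1 AND key3)`.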
One option is to decouple execution from OpenAPI here. We can have a `security` param to Operation and App: `Operation(security=(APIKeyHeader(...) & APIKeyQuery(...)) | OAuth2Bearer(...))`. I guess if the user doesn't pass in this parameter, we default to ANDing all of the models as far as OpenAPI is concerned. Separately, users would control execution / validation by hand, using `auto_error` and such.
Here's an API I came up with for this:
```python
apikey1 = APIKey("apikey1")
apikey2 = APIKey("apikey2")
oauth_model = OAuth("oauth", scopes=["read", "write"])
user_read = oauth_model.require_scopes("read")

async def get_user_id(
    key1: Annotated[Optional[str], Security(apikey1)],
    key2: Annotated[Optional[str], Security(apikey2)],
    token: Annotated[Optional[str], Security(user_read)],
) -> int:
    if key1 and key2:
        validate_keys(key1, key2)
        return int(key1)
    val_token = validate_token(token)
    user_read.validate_scopes(val_token["scopes"])
    return val_token["user_id"]

async def get_items(user: Annotated[int, Depends(get_user_id)]) -> list[str]:
    return ["sandwich"]

app = App(
    routes=[
        Path(
            "/items/",
            get=Operation(get_items, security=(apikey1 & apikey2) | oauth_model),
        )
    ]
)
```
Some notes on this design:

- `apikey1 & apikey2` is a pretty nice way to express "both keys must be present".
- `(apikey1 | apikey2) & oauth2` can be rejected (mypy error and runtime failure) since OpenAPI can't represent it.
- Restricting the `security` parameter to only `Operation` and `App` gives a 1:1 mapping with OpenAPI.
- It removes the need for a `scopes` parameter to `Security` and, more importantly, the `SecurityScopes` dependency that FastAPI injects just so that you can check the security scopes that are expected.
- The downsides are (1) `get_user` duplicating checks when we already declared the logic for applying the security layers in `Operation(..., security=...)`, and (2) having to declare `user_read = oauth_model.require_scopes("read")` and then call `user_read.validate_scopes(val_token["scopes"])`.
- It requires overloading `__or__` and `__and__`.

For convenience, we could make the result of `security: Annotated[??, Security((apikey1 & apikey2) | oauth)]` be usable as a dependency that enforces the dependencies and their boolean logic. The question is: what does this thing return? I don't think returning a list of lists of unnamed values is sensible.
There's a bit of massaging that could be done around here. For example, `OAuth2` could have a `validate=...` parameter that accepts a callback `validate(str) -> T`, where `T` is the type of a validated token and is what ends up being returned. Maybe `OAuth2` could then enforce the scopes itself.
Or we could even just make an OAuth client that gets the JWKs from the `.well-known` URL. The pro of this is that it's not trivial to implement w.r.t. caching and such, and most providers either don't have clients or have terrible ones. But it would bring in a lot of dependencies and complexity.
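To illustrate why the caching part alone is fiddly, here's a minimal sketch of a TTL cache for a fetched JWKS document. Every name here is hypothetical, and a real client would also need HTTP error handling, `kid`-based key rotation, and stampede protection:

```python
import time
from typing import Any, Callable, Dict, Optional

class JWKSCache:
    """Cache a JWKS document fetched from an OIDC .well-known URL.

    The fetch function is injected so the caching logic stays testable;
    in a real client it would perform the HTTP request.
    """

    def __init__(self, fetch: Callable[[], Dict[str, Any]], ttl_seconds: float = 300.0) -> None:
        self._fetch = fetch
        self._ttl = ttl_seconds
        self._cached: Optional[Dict[str, Any]] = None
        self._fetched_at = 0.0

    def get(self) -> Dict[str, Any]:
        # Re-fetch only when there is no cached document or the TTL expired
        now = time.monotonic()
        if self._cached is None or now - self._fetched_at > self._ttl:
            self._cached = self._fetch()
            self._fetched_at = now
        return self._cached
```

Even this leaves open questions (should a fetch failure serve stale keys? should an unknown `kid` force an early refresh?), which is the kind of complexity a built-in client would have to own.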
@yezz123 curious if you have any thoughts on this given your experience with the FastAPI auth ecosystem?
Is it possible to use classes to define the security parameters? If yes, a possible solution would be:
```python
from typing import overload

class ApiKeySecurity(SecurityModel):
    key1: Annotated[str, Security(apikey1)]
    key2: Annotated[str, Security(apikey2)]

class TokenSecurity(SecurityModel):
    token: Annotated[str, Security(user_read)]

@overload
async def get_user_id(security: ApiKeySecurity) -> int:
    validate_keys(security.key1, security.key2)
    return int(security.key1)

@overload
async def get_user_id(security: TokenSecurity) -> int:
    val_token = validate_token(security.token)
    user_read.validate_scopes(val_token["scopes"])
    return val_token["user_id"]

# Annotated[int, Depends(get_user_id)] => shouldn't Depends just return
# whatever get_user_id returns to save an annotation? Maybe I'm missing something
async def get_items(user: Annotated[int, Depends(get_user_id)]) -> list[str]:
    return ["sandwich"]

DualSecurity = ApiKeySecurity | TokenSecurity

app = App(
    routes=[
        Path(
            "/items/",
            get=Operation(get_items, security=DualSecurity),
        )
    ]
)
```
But given that this boilerplate is going to be repeated a lot, providing helpers would be nice:
```python
from typing import overload

apikey1 = APIKey("apikey1")
apikey2 = APIKey("apikey2")
oauth_model = OAuth("oauth", scopes="rw")

class ApiKeySecurity(ApiKeySecurityModel):
    key1: apikey1.security
    key2: apikey2.security

class TokenSecurity(TokenSecurityModel):
    token: oauth_model.scopes.read.security

# ApiKeySecurity is validated beforehand for you, like forms or api params.
# You can override validation on ApiKeySecurity like on any pydantic model
# if you want to customize it.
# Do we need to provide custom response formats depending on the validation?
# If yes, an additional mechanism must be put in place, but I don't
# know xpresso enough
@overload
async def get_user_id(security: ApiKeySecurity) -> int:
    return int(security.key1)

# TokenSecurity is validated beforehand for you, like forms or api params.
@overload
async def get_user_id(security: TokenSecurity) -> int:
    return security.token["user_id"]

async def get_items(user: Annotated[int, Depends(get_user_id)]) -> list[str]:
    return ["sandwich"]

app = App(
    routes=[
        # Security has a default value
        # that automatically extracts all securities from deps,
        # and expects at least one of them. One generic value can also
        # be passed to disable any security, e.g. for tests.
        Path("/items/", get=Operation(get_items)),
    ]
)
```
> Is it possible to use classes to define the security parameters?

Interesting thought. I think using classes in a declarative manner looks a lot nicer than my boolean operator overloading proposal. Given that we currently do something very similar for Forms, that looks nice.
> Security has a default value that automatically extracts all securities from deps, and expects at least one of them.

Yep, I think this is what we'll have to do: if you have a single security dependency, we'll use that directly. If you have multiple security dependencies (e.g. a dependency requiring a token that gets re-used in multiple operations, while your operations also require some other security), then you must pass in an explicit security model (a class like the ones you described) that is used to build the relationship between the security models and display it in OpenAPI.

This makes the simple case easy and the complex case possible. It's also unambiguous: we can simply raise an exception and refuse to start the application if the user uses multiple security models and does not explicitly give us their relationship.
> @overload

Is there any specific reason you used `overload` in your examples? My understanding is that at runtime only the last definition of the function is preserved and all of the `@overload` information is lost, so as far as Xpresso is concerned it would look like this:
```python
async def get_user_id(security: Union[ApiKeySecurity, TokenSecurity]) -> int:
    if isinstance(security, ApiKeySecurity):
        return int(security.key1)
    return security.token["user_id"]

async def get_items(user: Annotated[int, Depends(get_user_id)]) -> list[str]:
    return ["sandwich"]
```
Which I think makes using `@overload` a bit redundant, unless users want to do it just for their own documentation purposes.
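A quick self-contained demonstration of that runtime behavior (generic names, not Xpresso API): the stub definitions are each overwritten as the module executes, and only the final implementation remains callable:

```python
from typing import Union, overload

@overload
def describe(value: int) -> str: ...
@overload
def describe(value: str) -> str: ...

def describe(value: Union[int, str]) -> str:
    # This is the only definition that exists at runtime; the two
    # @overload stubs above were discarded as the module body executed.
    if isinstance(value, int):
        return f"int: {value}"
    return f"str: {value}"

print(describe(1))    # int: 1
print(describe("a"))  # str: a
```

Newer Python versions (3.11+) do expose `typing.get_overloads()` to recover the registered stubs for introspection, but the dispatch itself still only sees the final definition.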
I think that if I summarize your ideas, it would look something like this: if you have multiple models that are not connected into one root model, you must create the root model and pass it into `Operation(..., security=RootModel)` so that we know how to build the OpenAPI.

```python
from xpresso.security import OAuth, SecurityModel

oauth = OAuth("oauth2", scopes=["scope1", "scope2"])

class GetUserSecurity(SecurityModel):
    token: Annotated[str, oauth]

async def get_user(security: Security[GetUserSecurity]) -> None:
    ...
```
This would result in:

```yaml
security:
  - oauth2: [scope1, scope2]
```
```python
from xpresso.security import APIKey, OAuth, Security, SecurityModel

oauth = OAuth("oauth2", scopes=["scope1", "scope2"])
apikey = APIKey("apikey")  # the security scheme name is apikey

class GetUserTokenSecurity(SecurityModel):
    token: Annotated[str, oauth]

class GetUserApiKeySecurity(SecurityModel):
    api_key: Annotated[str, apikey]

class GetUserSecurity(SecurityModel):
    token: GetUserTokenSecurity | None
    api_key: GetUserApiKeySecurity | None

async def get_user(security: Security[GetUserSecurity]) -> None:
    ...
```
OpenAPI:

```yaml
security:
  - oauth2: [scope1, scope2]
  - apikey: []
```
```python
from xpresso.security import APIKey, OAuth, Security, SecurityModel

oauth = OAuth("oauth2", scopes=["scope1", "scope2"])
apikey1 = APIKey("apikey1")
apikey2 = APIKey("apikey2")

class GetUserTokenSecurity(SecurityModel):
    token: Annotated[str, oauth]

class GetUserApiKeySecurity(SecurityModel):
    api_key_1: Annotated[str, apikey1]
    api_key_2: Annotated[str, apikey2]

class GetUserSecurity(SecurityModel):
    token: GetUserTokenSecurity | None
    api_key: GetUserApiKeySecurity | None

async def get_user(security: Security[GetUserSecurity]) -> None:
    ...
```
OpenAPI:

```yaml
security:
  - oauth2: [scope1, scope2]
  - apikey1: []
    apikey2: []
```
> Annotated[int, Depends(get_user_id)] => shouldn't Depends just return whatever get_user_id returns to save an annotation?

That would be nice! But `Annotated`/PEP 593 does not work like that. There is no way to declare "look for the first annotation, which is a callable returning `T`, and make the type `T`".
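To make that concrete, here is a small sketch (this `Depends` class is a stand-in, not the real implementation) showing that PEP 593 metadata and the declared type are simply stored side by side; a framework can read the metadata at runtime, but a type checker only ever sees `int`:

```python
from typing import Annotated, get_args, get_type_hints

class Depends:
    """Hypothetical stand-in for a framework dependency marker."""
    def __init__(self, call):
        self.call = call

def get_user_id() -> int:
    return 42

def endpoint(user: Annotated[int, Depends(get_user_id)]) -> None:
    ...

# The framework can pull the metadata back out at runtime:
hints = get_type_hints(endpoint, include_extras=True)
base_type, *metadata = get_args(hints["user"])
print(base_type)                  # <class 'int'>
print(metadata[0].call.__name__)  # get_user_id
```

Nothing in the type system ties `int` to `get_user_id`'s return type, which is why the annotation has to be spelled out twice.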
In this specific scenario, though, we could do something a tad wonky that should work. Instead of returning the token itself to the user as a string, we can return an instance of the class they declared as their security scheme, and the token can be an attribute on that object. That object can also include the scopes they declared as class variables, so everything they would need to verify the JWT and compare its scopes to the required scopes is contained nicely in a single object.
```python
# see above
from xpresso.security import OAuth2, SecurityModel

class MyOAuth2Model(OAuth2):
    scheme_name = "oauth2"
    scopes = {"scope1", "scope2"}

class GetUserSecurity(SecurityModel):
    oauth2: MyOAuth2Model

async def get_user(security: Security[GetUserSecurity]) -> None:
    assert security.oauth2.token == "ey.."  # user logic to validate JWT
```
Then we'd get a pretty nice feature set:

- ✅ `MyOAuth2Model` is a security dependency because it inherits from `XpressoSecurityScheme`
- ✅ `security.oauth2.token` is a string
- ✅ `SecurityModel` can use `__init_subclass__` to enforce at import time that all of its fields' types inherit from `XpressoSecurityScheme` or `SecurityModel`
- ✅ No need for `Annotated` and duplicate typing information

Combining models as above would work:
```python
from xpresso.security import HeaderAPIKey, OAuth2, SecurityModel

class MyOAuth2Model(OAuth2):
    scheme_name = "oauth2"
    scopes = {"scope1", "scope2"}

class MyAPIKey(HeaderAPIKey):
    name = "X-Some-Header"  # if unset it gets inferred from the field name wherever it is used

class GetUserAPIKeySecurity(SecurityModel):
    apikey: MyAPIKey

class GetUserTokenSecurity(SecurityModel):
    token: MyOAuth2Model

class GetUserSecurity(SecurityModel):
    token: GetUserTokenSecurity | None
    api_key: GetUserAPIKeySecurity | None

async def get_user(security: Security[GetUserSecurity]) -> None:
    # assuming that `get_scopes_from_token` is some user function that knows how to validate the JWT
    assert security.token.scopes.issubset(get_scopes_from_token(security.token))
```
This is still kinda kludgy, maybe there's a better way.
If a 3rd party package wanted to provide an `OAuth2` model that actually verifies the JWT for you:

```python
class MyUserModel(BaseModel):
    name: str

class UserEnforcedOAuth2Model(EnforcedOAuth2Model[MyUserModel]):
    user_model = MyUserModel
    scopes = {"scope1", "scope2"}
```
@yezz123 I'm especially curious about this last part, would this provide the right extensibility points to create a 3rd party auth library that does the whole song and dance?
> @yezz123 I'm especially curious about this last part, would this provide the right extensibility points to create a 3rd party auth library that does the whole song and dance?

Yes, the last part could just be a way to use OAuth2 and give developers the choice of building their own 3rd-party integration directly, or building an authentication package on top of it. That's what I did with AuthX: I built my own functionality on top of what FastAPI and HTTPX provide, and created a customized serializer that AuthX users pass as a parameter to create their own 3rd-party instance. I provide Google and Facebook out of the box, but the same mechanism could be used for other providers, e.g. GitHub:
```python
from authx.services import SocialService
from authx.api import UsersRepo
from authx.core.jwt import JWTBackend

SocialService.setup(
    repo=UsersRepo,
    auth_backend=JWTBackend,
    base_url='http://localhost:8000',
    option='social',
)
```
You could check out https://github.com/frankie567/httpx-oauth, built on top of HTTPX and FastAPI and used by defining an instance and environment variables. You can also see how FastAPI's own support is built: https://github.com/tiangolo/fastapi/blob/master/fastapi/security/oauth2.py. That could help in building a native implementation, since OAuth2 is similar across libraries and frameworks: we instantiate the scopes and give developers the choice of building on top of them.

Using this before helped me understand how the flow of scopes and OAuth2 can be built on top of an architecture: florentcpt/poc_oauth2_native_app
I managed to cook up a tracer bullet implementation in case any of you are interested in taking a look: https://github.com/adriangb/xpresso/pull/68/files#diff-098decea0c8483a402cf0f191aa5d61c04a692f9b844491876437870c0e8aff0

There are some rough edges to smooth out. One thing that is a shame is that you don't get any auto-complete for adding class vars like this (you do get type checking, insofar as you can't do something like override `name: str` with `name = 123`).
> My understanding is that at runtime only the last definition of the function is preserved and all of the @overload information is lost,

Indeed, I hadn't thought about that. Mypy understands it because it parses the source, but at runtime we only get the residual metadata.
> If you have multiple models that are not connected into one root model, you must create the root model and pass it into Operation(..., security=RootModel) so that we know how to build the OpenAPI.

Should it raise an exception if it detects multiple models without one root and no explicit security parameter? This would prevent the user from assuming security is applied, and if the error message is nicely written, it can provide the steps to fix the code.
> Security[GetUserSecurity]

Is the `Security` wrapper needed? We don't need it for forms, after all. `SecurityModel` could provide the interface we need. Or, if we want to decouple it, create a `SecurityPolicy` that most users would use instead of `SecurityModel`, which automatically validates and has the interface of both `Model` and `Security`?
> This is still kinda kludgy, maybe there's a better way.

You are defining complex authentication logic, with 2 possible factors, in 10 lines of code. I think that's not bad. Especially since most users will not even see it: they probably just want to define a DI with `get_user()` and be done with it. In fact, the framework should probably provide some default security policies ready to be used for `get_user`.
For the User model, it's not as clear. Having a default user model linked to the db, with username and email, has been a problem for Django in the long run. Indeed, a user may not have a username or an email; a user could even not be human. On the other hand, asking xpresso users to define a User model themselves every time is probably going to get annoying very fast. But maybe that's more the role of something built on top of the framework, something that provides common User models for the most desired use cases. It's not an easy decision.
Also, to simplify custom auth, some simple generic templates to override would be nice. `EnforcedOAuth2Model` is probably a rare case where you want full control over attribute names. Most of the time, people with a custom auth scheme will be OK with the default choice.
```python
# I haven't checked the edge cases, but since dataclasses do a lot, it might be possible to use
# class decorators to avoid the explicit Generic[UserModelType] inheritance, and
# to generate the user_model attributes and __init__ methods
@security_class
class EnforcedOAuth2Model(OAuth2Base):
    user: UserModelType

    @classmethod
    async def extract(cls, authorization: FromHeader[str]) -> "EnforcedOAuth2Model[UserModelType]":
        assert authorization.startswith("Bearer ")  # don't put this in the doc, or people will use it in prod, and it will break with -O
        token = authorization[len("Bearer "):]
        payload = json.loads(token)  # or validate JWT, etc.
        scopes = set(payload["scopes"])
        assert scopes.issuperset(cls.scopes)
        return cls(user=cls.user_model(**payload), token=token)  # catch Pydantic ValidationError and raise an HTTPException
```
> token: GetUserTokenSecurity | None

Unrelated, but I love this syntax so much more than `Optional[]`. I hope `Annotated[]` will get a shortcut one day. `&` would be nice, and it's close enough, but it's not exactly the same, so I don't know...
> Should it raise an exception if it detects multiple models without one root and no explicit security parameter? This would prevent the user from assuming security is applied, and if the error message is nicely written, it can provide the steps to fix the code.

Yep, exactly. There will be only 2 cases supported:

1. A single security model, which is used directly.
2. An explicit `Operation(security=...)` argument.

So in the case you mention, an exception would be raised and the app wouldn't start. I think the error message can be pretty straightforward; we have all of the info, so we can build a nice template including some of the user's class names, etc.
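A sketch of what that startup check could look like, with all names hypothetical:

```python
class AmbiguousSecurityError(Exception):
    pass

def resolve_security(models: list, explicit=None):
    """Refuse to guess when multiple security models have no declared relationship."""
    if explicit is not None:
        # Case 2: an explicit Operation(security=...) argument always wins
        return explicit
    if len(models) == 1:
        # Case 1: a single security model is used directly
        return models[0]
    # Anything else is ambiguous: fail at startup with actionable guidance
    names = ", ".join(m.__name__ for m in models)
    raise AmbiguousSecurityError(
        f"Multiple security models found ({names}) with no root model. "
        f"Declare a SecurityModel subclass combining them and pass it via "
        f"Operation(..., security=YourRootModel)."
    )
```

Because the check runs before the app serves traffic, a misconfiguration can never silently weaken auth at request time.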
> Is the Security wrapper needed? We don't need it for forms after all. SecurityModel could provide the interface we need.

So for forms we do need a marker (`FromFormData` or `FromMultipart`).
If there is no explicit marker, `di` falls back to `inspect.signature` to figure out how to construct a class. This returns the parameters for `__init__`, but here we want the classes to be constructed using `async def __call__()` so that they can do stuff like make an HTTP request to get a JWK needed to validate a JWT. What the `Security` marker does is tell `di` to call `SecurityScheme.__call__` in order to construct the instance.
Here's a clearer example of what's going on:
```python
import asyncio
from dataclasses import dataclass
from typing import Any

from di import AsyncExecutor, Container, Dependant

@dataclass
class SecurityScheme:
    foo: str = "foo"

    @classmethod
    async def __call__(cls) -> "SecurityScheme":
        return SecurityScheme("bar")

class Security(Dependant[Any]):
    def __init__(self, scheme: SecurityScheme) -> None:
        super().__init__(scheme.__call__)

async def main() -> None:
    container = Container(scopes=(None,))
    executor = AsyncExecutor()
    without_wrapper = container.solve(Dependant(SecurityScheme))
    with_wrapper = container.solve(Security(SecurityScheme()))
    async with container.enter_scope(None):
        res1 = await container.execute_async(without_wrapper, executor=executor)
        res2 = await container.execute_async(with_wrapper, executor=executor)
    assert res1.foo == "foo"  # di called __new__/__init__
    assert res2.foo == "bar"  # this is what we want

asyncio.run(main())
```
Maybe we could make a special case for a class implementing an `@classmethod` `__call__` (this would have to go in `di`)? I could see that being useful beyond this single use case, for users to be able to asynchronously construct classes.
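As a sketch of what that special case might check for (names hypothetical, not `di`'s actual API): detect a class whose own `__call__` is an async classmethod and prefer it over `__init__`:

```python
import asyncio
import inspect
from typing import Any

def constructor_for(cls: type):
    """If the class defines an async @classmethod __call__, use it to build
    instances; otherwise fall back to the normal __init__ path."""
    raw = cls.__dict__.get("__call__")
    if isinstance(raw, classmethod) and inspect.iscoroutinefunction(raw.__func__):
        return cls.__call__  # bound classmethod; calling it returns a coroutine
    return cls

class AsyncBuilt:
    def __init__(self) -> None:
        self.source = "init"

    @classmethod
    async def __call__(cls) -> "AsyncBuilt":
        obj = cls()
        obj.source = "async __call__"
        return obj

class PlainBuilt:
    def __init__(self) -> None:
        self.source = "init"

async def build(cls: type) -> Any:
    result = constructor_for(cls)()
    return await result if inspect.isawaitable(result) else result

print(asyncio.run(build(AsyncBuilt)).source)  # async __call__
print(asyncio.run(build(PlainBuilt)).source)  # init
```

Checking `cls.__dict__` rather than `getattr` avoids tripping over `type.__call__`, which every class inherits from its metaclass.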
If we did that, we could also move around the Binder interfaces a bit, for example by creating a `Form` class and a `MultiPartForm` class that would be used like:

```python
class MyForm(Form):
    field: str
    other: int

async def endpoint(form: MyForm) -> None:
    ...
```
> You are defining complex authentication logic, with 2 possible factors, in 10 lines of code. I think that's not bad. Especially since most users will not even see it, and probably want to just define a DI with get_user() and be done with it.

👍 Yeah, I realize this is an advanced use case. I just want to make sure we don't throw a bunch of boilerplate at this purely for the sake of a clean design / API.
> For the User model, it's not as clear. Having a default user model linked to the db, with username and email, has been a problem for Django in the long run. Indeed, a user may not have a username or an email; a user could even not be human. On the other hand, asking xpresso users to define a User model themselves every time is probably going to get annoying very fast. But maybe that's more the role of something built on top of the framework, something that provides common User models for the most desired use cases. It's not an easy decision.

I feel strongly that we should not have a default user model of any kind. User models are very dependent on the final implementer's systems and domain. I think this is best left to 3rd party packages. That said, we should make sure that those 3rd party packages are possible and even easy to write 😃. I wouldn't be opposed to having some toy implementation in our tests / docs, just like we have a half-baked implementation of using MsgPack.
> it might be possible to use class decorators to avoid the explicit Generic[UserModelType] inheritance

Indeed, that's a good idea! Luckily, with this design we can punt that into the future / 3rd party packages, so it's not something we'd have to worry about in Xpresso (for now).
> Unrelated, but I love this syntax so much more than Optional[]. I hope Annotated[] will get a shortcut one day. & would be nice, it's close enough, but it's not exactly the same so I don't know...

`foo: int & Depends(scope="app")` would be interesting indeed! Or maybe `/`? `foo: int / Depends(...) / Field(...)`

The one big issue with `Annotated` is that (1) it requires an import and (2) it's very verbose. This is actually called out in PEP 593; the reasoning was "let's put this out there, see how it's used, and we can always come back later to make it less verbose".
> Maybe we could make a special case for a class implementing an @classmethod __call__ (this would have to go in di)?

With a minor change in `di` (PR) I was able to get rid of a decent chunk of code and complexity here! Thanks for the pushback @ksamuel
Thanks for all the work you've put into this.
Ran into a couple of hiccups after I tried implementing this. I'm going to think about it a bit, but I wanted to share here.

When you subclass a `SecurityScheme` and start typing, you don't know which parameters are required, which are optional, or what parameters are even available:

```python
class MyAPIKey(APIKey):
    # how does a user know what they need to fill in here
    # without digging into the docs?
    ...
```
This may not be a big deal if there are only 1-2 parameters, but I remember getting pretty frustrated by this when using Pydantic's `Config`, which is similar:

```python
class MyModel(BaseModel):
    class Config(BaseModel.Config):
        # is the parameter allow_extra or extra_allow?
        ...
```
The issue with instances is that they are not types, which would make `SecurityModel` very verbose. Contrast these two options:

```python
from xpresso.security import SecurityModel, APIKeyHeader

# classes
class APIKey(APIKeyHeader):
    name = "key"

class UsersSecurity(SecurityModel):
    key: APIKey

async def endpoint(security: UsersSecurity) -> None:
    ...

# instances
from typing import Annotated

from xpresso import Security

apikey = APIKey(name="key")

class UsersSecurity(SecurityModel):
    key: Annotated[str, Security(apikey)]  # verbose

async def endpoint(security: UsersSecurity) -> None:
    ...
```
This is tangential to this topic but somewhat relevant, because I'd like there to be a single logical way to do things across forms, security models, etc. If we were to change forms to:

```python
from xpresso.bodies import Form

class MyForm(Form):
    age: int
    name: str

async def endpoint(form: MyForm) -> None:
    ...
```
It is true that users save the boilerplate of a `FromFormData` marker, but what if they want to add a description to their request body? Now they'd have to do:

```python
from xpresso.bodies import Form

class MyForm(Form):
    age: int
    name: str

async def endpoint(form: Annotated[MyForm, Form(description=...)]) -> None:
    ...
```
Or do we go the Pydantic route and add a `Config` class? Another option would be using `__init_subclass__`:

```python
from xpresso.bodies import Form

class MyForm(Form, description=...):
    age: int
    name: str

async def endpoint(form: MyForm) -> None:
    ...
```
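For what it's worth, the mechanics of that option are straightforward; here's a self-contained sketch with a hypothetical `Form` base (not the real xpresso class):

```python
from typing import Any, Optional

class Form:
    """Hypothetical base class that captures per-subclass config."""
    description: Optional[str] = None

    def __init_subclass__(cls, *, description: Optional[str] = None, **kwargs: Any) -> None:
        # Keyword arguments in the class statement arrive here at class-creation time
        super().__init_subclass__(**kwargs)
        if description is not None:
            cls.description = description

class MyForm(Form, description="User signup form"):
    age: int
    name: str

print(MyForm.description)  # User signup form
```

The config lives on the subclass, so framework code can read `MyForm.description` at solve time without any extra marker at the endpoint signature.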
> It is true that users save the boilerplate of a FromFormData marker, but what if they want to add a description to their request body? Now they'd have to do:

I can see the value of having various degrees of granularity for passing config: one at the class level, one on the endpoint signature, etc. So yes, for putting things on the class, `__init_subclass__` makes sense: it's better than a config class, which people always find weird, and less verbose than a decorator. But we still need to find a way to pass things at the signature level. `Annotated[MyForm, Form(description=...)]` is indeed very verbose, so let's explore alternatives.
One possible approach would be to provide the equivalent of Django's CBV `as_view()`, with something like:

```python
async def endpoint(form: MyForm.override(description=...)) -> None:
    ...
```
I think it's preferable to `Annotated[MyForm, Form(description=...)]` for several reasons: there's no need to import `Annotated`, and no need to repeat `MyForm`, which is a context switch and will confuse people. I remember when Python 2 forced you to do `super(ClassName, self).parent_method()` instead of `super()`, and it was super annoying: not only did you have to go back and check your class's exact name, but nobody could remember the syntax. In the classroom, my students always asked me why they had to do so.

Unfortunately the `param: Foo.bar(...)` syntax is a non-starter: it's technically not a valid type annotation, and MyPy/Pyright will yell at you. We could do `Annotated[MyForm, MyForm.override(...)]`, but I fear that's not much better than the import.
Indeed, I should have tested it before writing that comment. I tried various versions of it, but the dynamism mypy allows is far more limited than what annotations accept. I think the local override with `Annotated[MyForm, Form(description=...)]` will then be the best route, knowing that most people will use the class parameters in most cases, so it should be rare enough.
How about:

```python
from xpresso import Form

class MyForm(Form):
    __config__ = Form.Config(description=...)
```

Or something along those lines? `__config__` (or whatever we call it) would be typed in the base class so that assigning something other than a `Form.Config` would give an IDE error, and `Form.Config` would be typed to accept the right parameters, so you'd get auto-complete and type checking.
I do prefer the option of doing `class MyForm(Form, description=...)` if you can pull it off with `__init_subclass__` or a metaclass. If not, a dunder attribute is still a better choice than an inner class.
It's doable that way. The main issues I worry about are readability and autocomplete; it's not common to see this:

```python
class MyForm(Form, description="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Praesent facilisis vel sapien eget lacinia. Nulla malesuada rutrum sodales. Curabitur at lobortis lacus. Cras metus ipsum, laoreet ut maximus nec, euismod ullamcorper felis. Morbi erat tellus, scelerisque in accumsan in, rutrum vitae massa. Pellentesque in viverra ex. Cras consequat massa neque, et dignissim risus commodo in. Proin iaculis justo mollis feugiat varius."):
    pass
```

And I'm not sure what formatters would even do with this.
Of course you could do:

```python
description = "Lorem ipsum dolor sit amet, consectetur adipiscing elit. Praesent facilisis vel sapien eget lacinia. Nulla malesuada rutrum sodales. Curabitur at lobortis lacus. Cras metus ipsum, laoreet ut maximus nec, euismod ullamcorper felis. Morbi erat tellus, scelerisque in accumsan in, rutrum vitae massa. Pellentesque in viverra ex. Cras consequat massa neque, et dignissim risus commodo in. Proin iaculis justo mollis feugiat varius."

class MyForm(Form, description=description):
    pass
```
But if you have a lot of parameters, I think it'd still either be a really long line or spill the class bases over multiple lines, which would be awkward:

```python
class MyForm(
    Form,
    description=...,
    ...,
    param100=...,
):
    pass
```
This is not realistic for Form at least (we only have 4 parameters or so), but I'm just trying to push the ideas to see where they break down.

As for autocomplete, an instance (assigned to a dunder attribute) gives you autocomplete, so you don't have to check the docs to remember the parameter names. At least in VS Code with Pylance, you don't get autocomplete for `__init_subclass__` parameters (you get errors, but not suggestions).

I think with some of these things (another example being the instance vs. class question for defining security models) it may be worth supporting multiple options at first, just to get user feedback and explore edge cases (like the above). Then, once the landscape is clearer, we can choose 1-2 winners and remove the other ways of doing things. I think as long as we're clear about what we're doing, that's okay for an early-stage project like this. All of this assumes that actually implementing multiple ways to do things would be easyish (in this case I think it would be).
It's version 0.21.0, so for now there is no stability guarantee; that's the point.

So I take it you like the plan of, in some cases, offering multiple ways to do something with the idea of keeping only some of them long term?
It depends on your definition of "like". When I do this, I know I'll have to implement migration facilities later, which is more work: an adapter, scripts emitting warnings when running the dev server, etc. It's one thing to break the API; it's another to leave your users high and dry.

So on my own projects I try to avoid it, because I know it will add a maintenance burden on my shoulders. I'd rather postpone the feature, code a few things on my own in a private alpha offering both solutions, see which one I use the most myself, and settle on one.

But I think it's an acceptable solution to the problem, provided you know what it implies.
Gotcha. I'll mull over it a bit. Thank you for all of the great input.
See https://swagger.io/docs/specification/authentication/

Currently we only support OR for security models: if you specify multiple security models, auth passes if any of them passes. OpenAPI supports combining OR and AND. Technically you can achieve this by setting `auto_error=False`, but this isn't reflected in OpenAPI.

A better approach would be something like an `AnyOfSecurity` marker, which would just be a Dependent subclass with no effect at runtime, but that we can use when generating OpenAPI docs. This has a nice parallel with how we handle forms.
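A rough sketch of that idea, with a stand-in `Dependent` and hypothetical names throughout: the marker is inert at runtime, and only the OpenAPI generator inspects it to emit one `security` alternative per entry:

```python
from typing import Any

class Dependent:
    """Stand-in for di's Dependent; the real class does much more."""
    def __init__(self, call: Any = None) -> None:
        self.call = call

class AnyOfSecurity(Dependent):
    """Marker: behaves like a plain Dependent at runtime; carries the
    scheme names so the docs generator can OR them together."""
    def __init__(self, *scheme_names: str) -> None:
        super().__init__(None)
        self.scheme_names = list(scheme_names)

def openapi_security(dep: Dependent) -> list:
    # Each scheme becomes its own entry in the `security` array,
    # which OpenAPI interprets as "any one of these suffices" (OR).
    if isinstance(dep, AnyOfSecurity):
        return [{name: []} for name in dep.scheme_names]
    return []

print(openapi_security(AnyOfSecurity("oauth2", "apiKey")))
# [{'oauth2': []}, {'apiKey': []}]
```

An analogous `AllOfSecurity` could place several schemes inside a single entry to express AND, mirroring the OR-of-ANDs shape the spec allows.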