hronro closed this issue 2 years ago.
A (hopefully helpful?) source of inspiration in OpenAPI generation design: tapir which is a Scala server/router with a philosophy comparable to Axum (router / builder pattern and no macro / annotation stuff).
The parts I'm finding interesting in the design: it's all methods added to existing structures.
You already have the router/server you built using the library, which knows about the routes, parameters, body types, etc. And there's a toOpenAPI method which generates the appropriate OpenAPI definition object. This OpenAPI definition can then be exposed in YAML or JSON format, and mounted on the endpoint of your choice. But the OpenAPI definition being an object (let's assume it would be a struct in Rust) also allows for enhancing the docs with comments, descriptions, server information, etc., which is a really nice feature.
Annotation (or macro, in Rust) driven libraries tend to be quite tedious in this regard: where would you put the annotation for the global app description? The app doc version? You have to annotate something, but what?
A good first step could be to try to generate an OpenAPI definition from the existing structs in Axum. For example: mapping endpoints to OpenAPI operations, using a set of Rust OpenAPI structs matching their concepts (Operation, Response, Schema, etc.), using existing crates like https://crates.io/crates/openapiv3, for instance.
From this point on, we could get a glimpse at what pieces of information are missing. For instance: examples for query parameters will be missing from the UrlParams extractor, I suppose. The same goes for Request/Response body schemas, or operation (get/post/put/...) descriptions.
Here is the hardest part imo, and it could lead to creating another crate. Can extractors be re-worked, in some way, to include such missing information? Is it desirable? Will it make the crate too cluttered, or too complex? That's a very tough question at this point.
A note on "re-usability".
One fantastic thing in the builder approach (as opposed to annotation or macro driven libraries) is in re-using the definitions in different places.
Let's imagine an Order API with endpoints like GET /api/orders/:order_id and PUT /api/orders/:order_id, for reading and changing a customer order, respectively.
The order_id path param probably must respect some formatting rules like [0-9]{4}-[0-9]{4} or whatever. It also (in OpenAPI terms) probably has a description pointing to a doc, and an example to ease the developer experience of API users.
In an annotation-driven library: chances are the API developer will have to repeat the same annotation on multiple endpoints (methods) definition.
In many "builder approaches" like Tapir (see above), or other libraries using the same idea, the developer can just write the order_id parameter definition once and use it in every endpoint definition. That's a big win in many regards: DRY, obviously, but also separation of concerns: a single file can contain every param (extractor, in the case of Axum) with its definition, formatting rules, description, and examples, keeping the service methods bound to the real implementation (updating the DB, etc.).
Unfortunately I'm very much a beginner in Rust, and can't even tell whether Axum's extractor pattern would fit this. But I thought it was worth mentioning.
Hopefully this helps in designing OpenAPI support at some point in time. Good luck with Axum, as said before, the builder approach definitely has some advantages over macro libraries (although both are great) and it's really really good to see such a library built on top of tokio and hyper. Thanks for creating this!
I experimented with the annotation-based approach and got it to generate an OpenAPI document from the code, using three macros:
#[axum_openapi::handler]
async fn handler() {}

#[derive(DescribeSchema)]
struct RequestBody {}

let app = axum_openapi::routes!(route("path", get(get_handler).post(post_handler)));
The first two are relatively straightforward; the third one I'm not too happy with because it isn't very resilient.
@aesteve
Can extractors be re-worked, in some way, to include such missing information? Is it desirable? Will it make the crate too cluttered, or too complex? That's a very tough question at this point.
The way I implemented it, there are two traits:
pub trait DescribeSchema {
    fn describe_schema() -> openapiv3::Schema;
}

pub trait OperationParameter {
    fn modify_op(operation: &mut openapiv3::Operation, required: bool);
}
where DescribeSchema can be derived and OperationParameter is implemented for extract::Json<T: DescribeSchema>, extract::UrlParams<..>, etc.
The macros could also be extended to support e.g. #[axum_openapi::handler(operation = openapiv3::Operation { .. })].
@jakobhellermann Thanks for looking into it, though I think we should try and find a solution that doesn't rely on macros. Or at least evaluate the ergonomics of such a solution.
I haven't had the time to experiment yet, but I imagine the solution described here should work 🤞
I haven't looked into it, but if you could somehow make use of the existing doc generation framework cargo doc, with say a cargo swagger :), and have it look for additional information from the doc macros: yes, it means more manual doc than pure code/generation/inspection, but it would also be in line with how docs are currently generated for Rust systems.
--just another very loose-formed idea I am throwing out there.
It looks like you can generate the OpenAPI descriptions purely based on traits, resulting in an API like this:
fn main() {
    let app = route("/pets", get(find_pets).post(add_pet))
        .route("/pets/:id", get(find_pet_by_id).delete(delete_pet));

    let openapi = app.openapi();

    let app = app
        .route("/openapi.yaml", openapi_yaml_endpoint(openapi.clone()))
        .route("/openapi.json", openapi_json_endpoint(openapi));
}
In addition to that, it's useful to be able to ignore handlers or provide a fallback openapiv3::Operation, like this:
.route("/other", get(handler.ignore_openapi()))
.route("/yet/another", get(some_handler.with_openapi(my_fallback_operation)))
I was able to implement this in a third-party crate only by making the fields of OnMethod and IntoService pub and by unsealing Handler.
My code is here if anyone wants to check it out.
Just wanted to say that this would ease understanding how routing code is working, probably making e.g. #174 easier to debug. At least as I understood it, generating OpenAPI docs would be as useful as listing existing routes on Rails?
I've been working on a POC in https://github.com/tokio-rs/axum/pull/170, but I've actually been thinking that rather than generating OpenAPI specifically, it might be better if Axum generated some Axum-specific AST which users could then write OpenAPI generators for. That way we could support other formats or different kinds of debugging, as @dbofmmbt mentions.
I'll continue working on it when 0.2 is released over the next few weeks.
@davidpdrsn or perhaps it is better to focus on OpenAPI as an MVP first. Release, stabilise. And then possibly extend it in future steps? :)
The "poem-web" library takes a very interesting approach to this problem: https://github.com/poem-web/poem/blob/master/examples/openapi/oneof/src/main.rs
Handlers are methods on a struct. This allows the macro to statically generate the entire specification.
If someone wanted to explore a macro based approach that should be entirely possible to build, without having to add additional stuff directly to axum
I want to explore a direction with as few macros as possible but that shouldn't hold back others.
From https://github.com/tokio-rs/axum/pull/459#issuecomment-1004730955:
An update on this: I haven't had much motivation to work on this lately so if someone wants to continue the work that'd be much appreciated 😊
This PR should contain the overall structure and hopefully it's clear; otherwise just ping me if you have questions. The goal is also to develop things in an external crate, so the work doesn't necessarily need to happen in this repo.
The only reason I look at Rocket now is the OpenAPI generator https://github.com/GREsau/okapi. I will donate 25 USD for a generator.
Just my 2c on why it's so important to us: in addition to the nice benefit of having the OpenAPI spec generated for you, rather than manually (and potentially incorrectly) creating it, we also use the generated spec in CI to verify that we don't accidentally break the API, or that when we do change the API it's reviewed by specific code owners.
This relates to one of Rust's most fun benefits: fearless refactoring. This is the missing piece for that.
The only reason I look at Rocket now is the OpenAPI generator https://github.com/GREsau/okapi. I will donate 25 USD for a generator.
I echo you. I had used Warp in production and it works well. For a new small microservice, I'd like to give axum or poem a try. I made a simple benchmark of warp, axum and poem and found they share similar behavior: performant, small binary, and low memory usage. One selling point of poem is OpenAPI generation.
Should it be like poem's OpenAPI? Such as: #[oai(path = "/hello", method = "get", method = "post")]
@Silentdoer, axum already has all of the information other than security (easy to add using a trait on FromRequest) and supported status codes (not sure how to fix that).
Adding macros everywhere feels like overkill (and also sucks; I love that axum is free from macros).
@jakobhellermann
My code is here if anyone wants to check it out.
I can't access this repository. Is there any chance you can make it accessible again?
I can't access this repository. Is there any chance you can make it accessible again?
Should be accessible now again.
@jakobhellermann : Thank you!
@Silentdoer, axum already has all of the information other than security (easy to add using a trait on FromRequest) and supported status codes (not sure how to fix that).
Adding macros everywhere feels like overkill (and also sucks; I love that axum is free from macros).
Are status codes really needed for an MVP? OpenAPI generation, even if incomplete, would be very helpful, not to mention that the user can probably add status codes themselves. I think what we have right now is a good MVP.
If an incomplete implementation cannot be merged into axum, it could exist as its own crate (using extension methods on Router) until the design work needed for OpenAPI status codes is complete.
My suggestion would be to reverse the problem and generate Axum routes/scaffolding from the OpenAPI specification. If done right, the boilerplate code of setting up basic scaffolding could be auto-generated, and developers could focus on providing business logic.
@dawid-nowak, I strongly disagree. Even if only because of my reasoning in https://github.com/tokio-rs/axum/issues/50#issuecomment-1015823343 though I have many other reasons to why I think generating from code is superior.
I feel an integration with https://github.com/juhaku/utoipa would be sufficient for most people who like the code first approach.
developers could focus on providing business logic.
@dawid-nowak, the request handler IS the business logic. It is totally backwards to start from the OpenAPI spec, since your requirements are guaranteed to change over time. I want to spend my time thinking about what I'm trying to do, not how to specify that in 1000 lines of yaml.
If you start from the OpenAPI spec, you end up in one of two situations:
If you start from the OpenAPI spec, your clients might get updated before your server. That can't happen if the server code is the source of the spec.
If you start from the code, then your API will always be in sync, and you only have to specify things once. No need to learn a totally new type system (because that's what OpenAPI is) before you can implement your handler. It is overwhelmingly easier to specify type information in Rust than in OpenAPI.
Rust has more type knowledge than most languages. We can do better than re-specifying things that the type system already knows. It's one of the defining features of the language. We're Rust programmers, not OpenAPI programmers.
90% (yes, I made this number up, since it's actually 100% of the ones I've interacted with and I'm trying to be charitable) of the time, when people start with the OpenAPI spec, they use a single type for both the request and the response. This is almost always incorrect. Most of the time, there are optional fields in the request which are required in the response (or vice versa), so they just mark everything as optional. This makes the generated code terrible, especially in Rust, where everything will be wrapped in Option<T>. People make this mistake not because they are stupid, but because it is easier to write. If you start with the Rust code, you are far less likely to make this mistake, since the easy way is to just specify both types separately. The path of least resistance is also the path to the more correct implementation, where the type system can help force you to build correct responses and validate your inputs.
In Rust, I don't have to think about whether something is required, since that's the default. If something is Optional, that's in your face all the time. Reversing the default from OpenAPI is a huge reduction in mistakes.
Great conversation :) All valid reasons, but then: am I really that interested in bootstrapping my project N times using the tried and tested copy-and-paste technique, re-using stuff on the basis that since it worked once it must still be good?
Or wouldn't it be nicer to write my interfaces in an OpenAPI spec and have a tool bootstrap the whole project for me, using state-of-the-art, best-in-class idiomatic Rust code, generating all the appropriate objects so that as a developer I can just provide the business logic? Imagine your speed to market :)
Let's assume I have annotated my code, and then what happens?
On the contrary, with the OpenAPI-first approach, one could publish the spec in a public repository; the production pipeline could take the spec, consistently generate the server code, and merge it with the business logic code. If that process fails, we know there is a mismatch between the business logic and the interfaces, and we don't deploy. Otherwise we have a relatively strong guarantee that the documentation matches the APIs implemented.
@alex-hunt-materialize Some thoughts
If you start from the OpenAPI spec, your clients might get updated before your server. That can't happen if the server code is the source of the spec.
Hmm, what could happen is that your clients are out of sync with your server and you have hundreds of angry clients :) Just because you have deployed the latest and greatest doesn't mean that everyone has updated their client code.
If you start from the code, then your API will always be in sync, and you only have to specify things once. No need to learn a totally new type system (because that's what OpenAPI is) before you can implement your handler. It is overwhelming easier to specify type information in Rust than in OpenAPI.
You are right that if I expose the documentation as some sort of web page from the server that has the annotated code, then it is going to be in sync; and only then. And I think you could easily invert that logic: if I generate my server logic from OpenAPI, then merge it with my business logic, and it compiles and passes all the tests, then I have a pretty good guarantee that my code matches the specified interface before I deploy it to the production environment.
Or wouldn't it be nicer to write my interfaces in OpenAPI spec and have a tool to bootstrap the whole project for me using state of the art and best in class idiomatic Rust code and generate all the appropriate objects so as a developer I can just provide business logic. Imagine your speed to market :)
Even with the code-first approach we could have this bootstrapper as a separate tool to achieve a similar speed to market. Though, I don't see how describing an API in YAML or JSON would be faster than using Rust syntax. At least schemars syntax is significantly more compact than the resulting JSON Schema document.
Or wouldn't it be nicer to write my interfaces in OpenAPI spec
I would agree with this in the sense that it's better to describe OpenAPI-related logic declaratively; the layer would not only generate the schema document, but also ensure correct validation, etc. But again, nothing prevents us from using Rust. Here's how I do it now, using schemars and jsonschema:
use std::any::type_name;

use jsonschema::JSONSchema;
use once_cell::sync::OnceCell;
use schemars::{schema_for, JsonSchema};
use serde::Serialize;
use serde_json::Value;

pub trait SchemaValidation: Sized + JsonSchema + Serialize {
    /// JSON schema document.
    ///
    /// NB: a `static` inside a default trait method is a single static shared
    /// by every implementor, so this cache holds whichever type's schema is
    /// computed first; per-type caching (e.g. keyed by `TypeId`) would be
    /// needed if more than one type implements the trait.
    fn schema_document() -> &'static Value {
        static CACHE: OnceCell<Value> = OnceCell::new();
        CACHE.get_or_init(|| {
            serde_json::to_value(schema_for!(Self))
                .unwrap_or_else(|_| panic!("{}::schema_document", type_name::<Self>()))
        })
    }

    /// Compiled JSON schema (same shared-static caveat as above).
    fn schema_compiled() -> &'static JSONSchema {
        static CACHE: OnceCell<JSONSchema> = OnceCell::new();
        CACHE.get_or_init(|| {
            jsonschema::JSONSchema::compile(Self::schema_document())
                .unwrap_or_else(|_| panic!("{}::schema_compiled", type_name::<Self>()))
        })
    }

    /// Performs validation and returns the first error as a string containing the offending field's path.
    fn schema_validate(&self) -> Result<(), String> {
        if let Err(Some(err)) = Self::schema_compiled()
            .validate(&serde_json::to_value(self).unwrap())
            .map_err(|mut err| err.next())
        {
            Err(format!("Validation error: {}", err.instance_path))
        } else {
            Ok(())
        }
    }
}
Just a side note, I wrote an example axum with utoipa. You may find it here: https://github.com/juhaku/utoipa/tree/master/examples/todo-axum.
Those who want a deeper integration between axum and utoipa, just hit the thumbs up and it may find its way to the project's kanban board at some point. However, the integration support could be at the level of resolving path and query parameters, as well as a default response (for status 200, maybe?), once I get that far with the other frameworks as well. As for the actual path and path operation type, there is no way to access that information easily with the current design. Unless the integration were done completely from axum's side?
The current example shows the support with raw utoipa and utoipa-swagger-ui.
An update for those wondering what the status of this is:
I don't currently have bandwidth to take this on so haven't made any progress. The approach I explored in https://github.com/tokio-rs/axum/pull/459 is still the way I'd go. Basically inferring things from the types instead of using macros. That should give greater flexibility and work better with IDEs.
https://github.com/tokio-rs/axum/pull/945 is another, slightly different approach that does use some macros. I'd still wanna explore a macro-free solution first before merging something like https://github.com/tokio-rs/axum/pull/459.
Regardless which approach one picks it should be doable in a separate crate and shouldn't depend on any axum internals. If someone has started working on something please share it! I will gladly provide feedback 😊
Thanks for the update David! I think #945 is waiting on your feedback. Even if it shouldn't be merged, maybe it can be a good start of an external crate.
I posted https://github.com/tokio-rs/axum/pull/945#issuecomment-1132030318 just now :)
I generally don't like having issues open that aren't actionable so I think I'll close this for now.
People will still comment on it even if you close it. Better to transfer it to a discussion or tag it properly. The fact is this is still an issue, even if it's not actionable at the moment.
I mean, people are free to still comment, but it's not something we'll address inside axum anytime soon, so I don't see why we should keep it open. If one day we decide to build something, then we can just re-open it.
aide works pretty well!
https://docs.rs/aide/latest/aide/
Perhaps something like this can be merged upstream?
Personally, this idea of closing issues because there are no plans to address them soon makes little sense to me; just apply an aspirational label or something, but so be it. Given the issue is now closed, where can we track progress on this work (whenever it eventuates) if not this issue? Should we open a topic in Discussions instead?
aide seems to be at least maintained (probably actively developed). I've been looking into updating it to axum 0.6 (non-RC) to see if it fits my use cases, and from my couple of hours of searching, it seems like the best solution for having API docs in axum.
@banool You can track the work wherever it's being implemented.
I understand the frustration, but I stand by the decision to close this issue. I don't think it's fair that things we don't intend to implement (for now at least) should remain as open issues. What makes OpenAPI special in this regard? There are many things we could do but choose not to. Should they get "aspirational" issues as well?
I've looked a bit at aide and think it looks very promising! It seems close to the design I had in mind. I recommend you try it and track improvements to it in their repo.
And you're free to open a Discussion if you want 😊
@davidpdrsn I understand the frustration on your side too, with wanting to keep a clean issue tracker; I just wonder where we are meant to keep an eye on the roadmap? I work on open source projects too, and I agree there is no obvious solution here. In my case I usually opt for issues, even for things we're not planning to do soon, since that's where most people go looking. So there is nothing special about this feature request in particular; I would open aspirational issues for everything the project might get to at some point, since it's a good way to crowdsource ideas, gauge community interest, etc.
Of course run the project in the way that works best for you, just my 2 cents.
My two cents, I've worked on much bigger projects, and from my experience, if you keep closing issues that don't belong on the roadmap, all you accomplish is people opening more duplicates. There's no "inbox zero" for issue trackers.
I just wonder where we are meant to be able to keep an eye on the roadmap?
The issues are the roadmap. Currently OpenAPI support is not on the roadmap to be built into axum, so there isn't an open issue for it.
if you keep closing issues that don't belong on the roadmap, all you accomplish is people opening more duplicates.
That's not my experience so far. This issue still shows up in search and it isn't locked so people can still comment.
There's no "inbox zero" for issue trackers.
It's not about inbox zero. It's about setting expectations. I don't want people to expect that since there is an open issue about openapi that means we are working on it. I often see people commenting on issues asking what the progress is, because they expect an issue means someone is working on it.
Against a lot of people here, I would like to campaign for a contract-first and code-generation approach, for the following reasons:
Finally, there's the fact that gRPC is also designed to be contract-first, and you generate your Rust server or client code from it. Would you also like to generate your .proto files from Rust code? What's different about OpenAPI/Swagger?
A year ago I started with sul, which generates a tower::Service from an OpenAPI contract. I still like the approach of operating at the tower level, in that it can be bridged to any server impl, but I think I would now change it to how prost does it: contract => struct MyServer (impl tower::Service) + async_trait MyService.
IMO contract-first approach has its right to exist and doesn't require explicit support from the framework anyway (except a code generation tool, which can be external).
However at the end of the day, you need your code to match up with the contract, so in the end it just means doing the same work twice.
I think an interesting approach to e.g. avoid accidentally breaking the contract by making a code change is to export that contract statically, commit it to your repo, and snapshot test against it. That also means someone can read it without compiling and running the code. That contract could even be some sort of contract-test thing (more complex than an openapi.json file).
However at the end of the day, you need your code to match up with the contract, so in the end it just means doing the same work twice.
Why "twice"? The amount of overlapping work is just a mapping; the impl and the contract are always in sync, verified by the compiler.
async fn get_pets(req: GetPetsRequest /* .parameters */) -> GetPetsResponse;
(or .. req: GetPetsRequest /* .body */ .., depending on where the contract maps the data from)
We can also use serde_json, serde_xml, ... depending on the content type of the contract, and much more. I doubt it's possible to generate such contracts from code.
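The "contract => struct MyServer + trait MyService" shape mentioned above could look roughly like this; all names are invented (not sul's or prost's actual output), and the tower::Service / HTTP glue is omitted to keep the sketch small:

```rust
// Hypothetical output of a contract-first generator.

// Generated from the contract:
pub struct GetPetsRequest {
    pub limit: Option<u32>, // mapped from .parameters or .body, per the contract
}

pub struct GetPetsResponse {
    pub names: Vec<String>,
}

pub trait PetService {
    fn get_pets(&self, req: GetPetsRequest) -> GetPetsResponse;
}

// The user only writes the business-logic impl; the generated server would
// wrap it and perform the HTTP <-> struct mapping.
struct MyService;

impl PetService for MyService {
    fn get_pets(&self, req: GetPetsRequest) -> GetPetsResponse {
        let limit = req.limit.unwrap_or(10) as usize;
        let names = ["Rex", "Mia"].iter().map(|s| s.to_string()).take(limit).collect();
        GetPetsResponse { names }
    }
}
```

If the generator re-runs in CI, a contract change that no longer matches the `impl` fails to compile, which is the "verified by the compiler" guarantee the comment describes.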
@DaAitch I love the idea!
That would fit our use case fantastically. I absolutely love Axum's extractor approach; it's been a real joy to work with because it's so flexible and modular and plays into Rust's strong type system. We've also been using Utoipa purely to generate frontend client code. Without meaning any disrespect to the authors of that library, it's way too much boilerplate and headache for our use, so here I am again, scouring the internet for a simple RPC solution with client codegen. But I've kind of fallen for Axum, and having plain old HTTP JSON requests is just so easy to debug from browsers, and is ubiquitously supported. I just want strongly typed client code (for a subset of routes).
A few questions come to mind.
Edit: Also, while this almost certainly can/should be a separate tool, I think having the discussion here has the advantage of letting Axum authors chime in on typings (like ideas on how to play nice with Extractors). I'm not advocating that Axum itself support this.
I would discourage using protobuf for schema definitions, as it has two major disadvantages:
The best approach would be to use schemas written in Rust.
If you are writing a RESTful API service, OpenAPI/Swagger documentation is very useful. It would be great if axum were able to generate it automatically.