tokio-rs / axum

Ergonomic and modular web framework built with Tokio, Tower, and Hyper
18.96k stars 1.06k forks

Support generating OpenAPI/Swagger docs #50

Closed hronro closed 2 years ago

hronro commented 3 years ago

If you are writing a RESTful API service, OpenAPI/Swagger documentation is very useful. It would be great if axum were able to generate it automatically.

Michael-J-Ward commented 3 years ago

Just documenting some prior art in this area

aesteve commented 3 years ago

A (hopefully helpful?) source of inspiration in OpenAPI generation design: tapir which is a Scala server/router with a philosophy comparable to Axum (router / builder pattern and no macro / annotation stuff).

The parts I'm finding interesting in the design: it's all methods added to existing structures. You already have the router/server you built using the library, which knows about the routes, parameters, body types, etc. And there's a toOpenAPI method which generates the appropriate OpenAPI definition object. This OpenAPI definition can then be exposed in YAML or JSON format, and mounted on the endpoint of your choice. But the OpenAPI definition being an object (let's assume it would be a struct in Rust) also allows for enhancing the docs with comments, descriptions, server information, etc., which is a really nice feature. Annotation-driven (or, in Rust, macro-driven) libraries tend to be quite tedious in this regard: where would you put the annotation for the global app description? For the app doc version? You have to annotate something, but what?


A good first step could be to try to generate an OpenAPI definition from the existing structs in axum. For example: mapping endpoints to OpenAPI operations, using a set of Rust OpenAPI structs matching their concepts (Operation, Response, Schema, etc.), via an existing crate like https://crates.io/crates/openapiv3.
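That first step can be sketched without any framework support at all. The toy `Operation` and `Parameter` structs below stand in for the real `openapiv3` types (the names and shapes are assumptions for illustration only):

```rust
/// Toy stand-ins for openapiv3's Operation/Parameter types (illustration only).
#[derive(Debug, Default, PartialEq)]
pub struct Operation {
    pub operation_id: String,
    pub parameters: Vec<Parameter>,
}

#[derive(Debug, PartialEq)]
pub struct Parameter {
    pub name: String,
    pub location: String, // "path" or "query"
}

/// Convert an axum-style path template (`/pets/:id`) into the OpenAPI
/// template syntax (`/pets/{id}`) and collect the path parameters.
pub fn describe_route(path: &str, operation_id: &str) -> (String, Operation) {
    let mut openapi_path = String::new();
    let mut op = Operation {
        operation_id: operation_id.to_string(),
        ..Default::default()
    };
    for segment in path.split('/').filter(|s| !s.is_empty()) {
        openapi_path.push('/');
        if let Some(name) = segment.strip_prefix(':') {
            // `:id` becomes `{id}` and is recorded as a path parameter.
            openapi_path.push_str(&format!("{{{}}}", name));
            op.parameters.push(Parameter {
                name: name.to_string(),
                location: "path".to_string(),
            });
        } else {
            openapi_path.push_str(segment);
        }
    }
    (openapi_path, op)
}
```

Calling `describe_route("/pets/:id", "find_pet_by_id")` yields the path `/pets/{id}` with one path parameter, which is the shape the experiments later in this thread produce.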

From this point on, we could get a glimpse of which pieces of information are missing. For instance, examples for query parameters will be missing from the UrlParams extractor, I suppose. The same goes for request/response body schemas, or operation (get/post/put/...) descriptions.

Here is the hardest part, IMO, and it could lead to creating another crate: can extractors be re-worked, in some way, to include such missing information? Is it desirable? Would it make the crate too cluttered, or too complex? That's a very tough question at this point.


A note on "re-usability". One fantastic thing in the builder approach (as opposed to annotation- or macro-driven libraries) is re-using definitions in different places. Let's imagine an Order API with endpoints like GET /api/orders/:order_id and PUT /api/orders/:order_id, for respectively reading and changing a customer order. The order_id path param probably must respect some formatting rule like [0-9]{4}-[0-9]{4} or whatever. It also (in OpenAPI terms) probably has a description pointing to a doc, and an example to ease the developer experience of API users. In an annotation-driven library, chances are the API developer will have to repeat the same annotation on multiple endpoint (method) definitions. In many "builder approaches" like tapir (see above) or other libraries using the same idea, the developer can write the order_id parameter definition once and use it in every endpoint definition. That's a big win in many regards: DRY, obviously, but also separation of concerns, since a single file can contain every param (extractor, in the case of axum) with its definition, formatting rules, description, and examples, and keep the service methods bound to the real implementation (updating the DB, etc.). Unfortunately I'm a complete beginner in Rust and can't tell whether axum's extractor pattern would fit this. But I thought it was worth mentioning.
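The re-use described above can be sketched in plain Rust without any framework support; the `ParamSpec`/`EndpointSpec` types and the `order_endpoints` function below are invented for illustration:

```rust
/// A reusable parameter definition, written once and attached to many endpoints.
#[derive(Debug, Clone, PartialEq)]
pub struct ParamSpec {
    pub name: &'static str,
    pub pattern: &'static str,
    pub description: &'static str,
    pub example: &'static str,
}

/// One endpoint in the generated spec.
#[derive(Debug, PartialEq)]
pub struct EndpointSpec {
    pub method: &'static str,
    pub path: &'static str,
    pub params: Vec<ParamSpec>,
}

/// Defined once, in one file, next to all the other parameter specs.
pub const ORDER_ID: ParamSpec = ParamSpec {
    name: "order_id",
    pattern: "[0-9]{4}-[0-9]{4}",
    description: "Customer order identifier, see the orders documentation",
    example: "1234-5678",
};

pub fn order_endpoints() -> Vec<EndpointSpec> {
    // The same ParamSpec is shared by both operations: change the pattern
    // or the example once and every endpoint's documentation follows.
    vec![
        EndpointSpec { method: "GET", path: "/api/orders/:order_id", params: vec![ORDER_ID] },
        EndpointSpec { method: "PUT", path: "/api/orders/:order_id", params: vec![ORDER_ID] },
    ]
}
```

The design choice here is exactly the DRY point above: the formatting rule and example live in one place, and the two endpoint definitions merely reference it.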


Hopefully this helps in designing OpenAPI support at some point in time. Good luck with Axum, as said before, the builder approach definitely has some advantages over macro libraries (although both are great) and it's really really good to see such a library built on top of tokio and hyper. Thanks for creating this!

jakobhellermann commented 3 years ago

I experimented with the annotation-based approach and got it to generate

OpenAPI Schema:

```yaml
---
openapi: 3.0.3
info:
  title: ""
  version: ""
paths:
  "/pets/{id}":
    get:
      operationId: find_pet_by_id
      parameters:
        - in: path
          name: id
          required: true
          schema:
            type: integer
            format: int64
      responses:
        default:
          description: Default OK response
    delete:
      operationId: delete_pet
      parameters:
        - in: path
          name: id
          required: true
          schema:
            type: integer
            format: int64
      responses:
        default:
          description: Default OK response
  /pets:
    get:
      operationId: find_pets
      parameters:
        - in: query
          name: tags
          schema:
            nullable: true
            type: array
            items:
              type: string
        - in: query
          name: limit
          schema:
            nullable: true
            type: integer
            format: int32
      responses:
        default:
          description: Default OK response
    post:
      operationId: add_pet
      requestBody:
        content:
          application/json:
            schema:
              $ref: "#/components/schemas/AddPetRequestBody"
        required: true
      responses:
        default:
          description: Default OK response
  /openapi.yaml:
    get:
      operationId: api_yaml
      responses: {}
  /openapi.json:
    get:
      operationId: api_json
      responses: {}
components:
  schemas:
    NewPet:
      type: object
      properties:
        name:
          type: string
        tag:
          nullable: true
          type: string
    PetExtra:
      type: object
      properties:
        id:
          type: integer
          format: int64
    Pet:
      type: object
      properties:
        new_pet:
          type: object
          properties:
            name:
              type: string
            tag:
              nullable: true
              type: string
        pet_extra:
          type: object
          properties:
            id:
              type: integer
              format: int64
    AddPetRequestBody:
      type: object
      properties:
        name:
          type: string
        tag:
          nullable: true
          type: string
    FindPetsQueryParams:
      type: object
      properties:
        tags:
          nullable: true
          type: array
          items:
            type: string
        limit:
          nullable: true
          type: integer
          format: int32
```

from

Rust Code (generic arguments such as `Option<String>` restored where they were eaten by HTML escaping, based on the schema above):

```rust
use axum::prelude::*;
use std::net::SocketAddr;

use axum_openapi::DescribeSchema;

#[tokio::main]
async fn main() {
    let app = axum_openapi::routes!(route("/pets", get(find_pets).post(add_pet))
        .route("/pets/:id", get(find_pet_by_id).delete(delete_pet))
        .route("/openapi.yaml", get(axum_openapi::api_yaml))
        .route("/openapi.json", get(axum_openapi::api_json)));

    let addr = SocketAddr::from(([127, 0, 0, 1], 3000));
    hyper::server::Server::bind(&addr)
        .serve(app.into_make_service())
        .await
        .unwrap();
}

mod model {
    use axum_openapi::DescribeSchema;

    #[derive(Debug, serde::Serialize, serde::Deserialize, DescribeSchema)]
    pub struct Pet {
        #[serde(flatten)]
        new_pet: NewPet,
        #[serde(flatten)]
        pet_extra: PetExtra,
    }

    #[derive(Debug, serde::Serialize, serde::Deserialize, DescribeSchema)]
    pub struct PetExtra {
        id: i64,
    }

    #[derive(Debug, serde::Serialize, serde::Deserialize, DescribeSchema)]
    pub struct NewPet {
        name: String,
        tag: Option<String>,
    }
}

#[derive(Debug, serde::Serialize, serde::Deserialize, DescribeSchema)]
pub struct FindPetsQueryParams {
    tags: Option<Vec<String>>,
    limit: Option<i32>,
}

/// Returns all pets from the system that the user has access to
#[axum_openapi::handler]
async fn find_pets(query_params: Option<extract::Query<FindPetsQueryParams>>) {
    println!("find_pets called");
    println!("Query params: {:?}", query_params);
}

#[derive(Debug, serde::Serialize, serde::Deserialize, DescribeSchema)]
pub struct AddPetRequestBody {
    name: String,
    tag: Option<String>,
}

/// Creates a new pet in the store. Duplicates are allowed.
#[axum_openapi::handler]
async fn add_pet(request_body: axum::extract::Json<AddPetRequestBody>) {
    println!("add_pet called");
    println!("Request body: {:?}", request_body);
}

/// Returns a user based on a single ID, if the user does not have access to the pet
#[axum_openapi::handler]
async fn find_pet_by_id(path_params: axum::extract::UrlParams<(i64,)>) {
    let (id,) = path_params.0;
    println!("find_pet_by_id called");
    println!("id = {}", id);
}

/// deletes a single pet based on the ID supplied
#[axum_openapi::handler]
async fn delete_pet(path_params: axum::extract::UrlParams<(i64,)>) {
    let (id,) = path_params.0;
    println!("delete_pet called");
    println!("id = {}", id);
}
```

Using three macros:

#[axum_openapi::handler]
async fn handler() {}

#[derive(DescribeSchema)]
struct RequestBody {}

let app = axum_openapi::routes!(route("path", get(get_handler).post(post_handler)));

The first two are relatively straightforward; the third one I'm not too happy with, because it isn't very resilient.



@aesteve

Can extractors be re-worked, in some way, to include such missing information? Is it desirable? Will it make the crate too cluttered, or too complex? That's a very tough question at this point.

The way I implemented it there are two traits:

pub trait DescribeSchema {
    fn describe_schema() -> openapiv3::Schema;
}

pub trait OperationParameter {
    fn modify_op(operation: &mut openapiv3::Operation, required: bool);
}

where DescribeSchema can be derived and OperationParameter is implemented for extract::Json<T: DescribeSchema>, extract::UrlParams<..>, etc. The macros could also be extended to support e.g. #[axum_openapi::handler(operation = openapiv3::Operation { .. })].
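To make the shape of that second trait concrete, here is a minimal, self-contained sketch. The `Operation` and `Query` types below are toy stand-ins (not the real `openapiv3` or `axum` types), and the schema is reduced to a string:

```rust
/// Toy stand-in for openapiv3::Operation: just a list of (schema, required).
#[derive(Debug, Default)]
pub struct Operation {
    pub parameters: Vec<(String, bool)>,
}

pub trait DescribeSchema {
    /// The real version would return an openapiv3::Schema.
    fn describe_schema() -> String;
}

pub trait OperationParameter {
    fn modify_op(operation: &mut Operation, required: bool);
}

/// Toy stand-in for axum::extract::Query<T>.
pub struct Query<T>(pub T);

/// Blanket impl: any extractor wrapping a describable type can register
/// itself on the operation, mirroring the two-trait design above.
impl<T: DescribeSchema> OperationParameter for Query<T> {
    fn modify_op(operation: &mut Operation, required: bool) {
        operation.parameters.push((T::describe_schema(), required));
    }
}

/// What #[derive(DescribeSchema)] would generate for a query-params struct.
pub struct Pagination;

impl DescribeSchema for Pagination {
    fn describe_schema() -> String {
        "object { page: integer, per_page: integer }".to_string()
    }
}
```

The key point is that the handler's extractor types alone are enough to populate the operation, with no annotation on the handler itself.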

davidpdrsn commented 3 years ago

@jakobhellermann Thanks for looking into it, though I think we should try and find a solution that doesn't rely on macros. Or at least evaluate the ergonomics of such a solution.

I haven't had the time to experiment yet, but I imagine the solution described here should work 🤞

OldManFroggy commented 3 years ago

Not looked into it, but if you could somehow make use of the existing doc generation framework (cargo doc), with say a cargo swagger :), and have it look for additional information in the doc comments and attributes — yes, it means more manual documentation than pure code generation/inspection, but it would also be in line with how docs are currently generated for Rust systems.

--just another very loose-formed idea I am throwing out there.

jakobhellermann commented 3 years ago

It looks like you can generate the OpenAPI descriptions purely based on traits, resulting in an API like this:

fn main() {
    let app = route("/pets", get(find_pets).post(add_pet))
        .route("/pets/:id", get(find_pet_by_id).delete(delete_pet));
    let openapi = app.openapi();

    let app = app
        .route("/openapi.yaml", openapi_yaml_endpoint(openapi.clone()))
        .route("/openapi.json", openapi_json_endpoint(openapi));
}

In addition to that, it's useful to be able to ignore handlers or provide a fallback openapiv3::Operation, like this:

.route("/other", get(handler.ignore_openapi()))
.route("/yet/another", get(some_handler.with_openapi(my_fallback_operation)))
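Such combinators can be sketched as an extension trait over a handler-description trait. Everything below is a hypothetical illustration with toy types, not the actual implementation:

```rust
/// Toy operation metadata (stand-in for openapiv3::Operation).
#[derive(Debug, Clone, PartialEq)]
pub struct Operation {
    pub description: &'static str,
}

/// What a handler contributes to the generated spec.
pub trait DescribeHandler {
    fn describe(&self) -> Option<Operation>;
}

/// Combinators in the spirit of `ignore_openapi` / `with_openapi`.
pub trait DescribeHandlerExt: DescribeHandler + Sized {
    fn ignore_openapi(self) -> Ignored<Self> {
        Ignored(self)
    }
    fn with_openapi(self, op: Operation) -> WithOp<Self> {
        WithOp(self, op)
    }
}

impl<H: DescribeHandler> DescribeHandlerExt for H {}

pub struct Ignored<H>(pub H);

impl<H: DescribeHandler> DescribeHandler for Ignored<H> {
    fn describe(&self) -> Option<Operation> {
        None // hidden from the generated spec
    }
}

pub struct WithOp<H>(pub H, pub Operation);

impl<H: DescribeHandler> DescribeHandler for WithOp<H> {
    fn describe(&self) -> Option<Operation> {
        Some(self.1.clone()) // the provided fallback wins
    }
}

/// A handler whose description is inferred from its types.
pub struct Handler;

impl DescribeHandler for Handler {
    fn describe(&self) -> Option<Operation> {
        Some(Operation { description: "inferred" })
    }
}
```

The wrappers compose with the plain handler exactly like the `.ignore_openapi()` / `.with_openapi(..)` calls in the routes above.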

I was able to implement this in a third party crate only by making the fields of OnMethod and IntoService pub and unsealing the Handler.

My code is here if anyone wants to check it out.

dbofmmbt commented 3 years ago

Just wanted to say that this would make it easier to understand how the routing code works, probably making e.g. #174 easier to debug. As I understand it, generating OpenAPI docs would be about as useful as listing existing routes is in Rails?

davidpdrsn commented 3 years ago

I've been working on a POC in https://github.com/tokio-rs/axum/pull/170, but I have actually been thinking that, rather than generating OpenAPI specifically, it might be better if axum generated some axum-specific AST, which users could then write OpenAPI generators for. That way we could support other formats, or different kinds of debugging, as @dbofmmbt mentions.
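The idea of an axum-specific intermediate representation that OpenAPI generators (or route listers) consume could be sketched like this; all names here are invented for illustration, not an actual axum API:

```rust
/// Hypothetical intermediate representation of an application's routes.
/// Generators (OpenAPI, route listings, debuggers) would consume this.
#[derive(Debug, PartialEq)]
pub enum PathSegment {
    Literal(String),
    /// A `:name` capture in the route template.
    Param(String),
}

#[derive(Debug, PartialEq)]
pub struct RouteNode {
    pub method: String,
    pub segments: Vec<PathSegment>,
}

/// Build the IR from a method and an axum-style path template.
pub fn parse_route(method: &str, path: &str) -> RouteNode {
    let segments = path
        .split('/')
        .filter(|s| !s.is_empty())
        .map(|s| match s.strip_prefix(':') {
            Some(name) => PathSegment::Param(name.to_string()),
            None => PathSegment::Literal(s.to_string()),
        })
        .collect();
    RouteNode { method: method.to_string(), segments }
}

/// Example consumer: a plain-text route listing (the "rails routes" use case).
pub fn list_route(node: &RouteNode) -> String {
    let path: String = node
        .segments
        .iter()
        .map(|s| match s {
            PathSegment::Literal(l) => format!("/{}", l),
            PathSegment::Param(p) => format!("/:{}", p),
        })
        .collect();
    format!("{} {}", node.method, path)
}
```

An OpenAPI generator would be just another consumer of `RouteNode`, mapping `Param` segments to path parameters.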

I'll continue working on it when 0.2 is released over the next few weeks.

szagi3891 commented 2 years ago

@davidpdrsn or perhaps it is better to focus on OpenAPI as an MVP first? Release, stabilise, and then possibly extend it in future steps? :)

szagi3891 commented 2 years ago

The "poem-web" library takes a very interesting approach to this problem: https://github.com/poem-web/poem/blob/master/examples/openapi/oneof/src/main.rs

Handlers are methods on a struct. This allows the macro to statically generate the entire specification.

davidpdrsn commented 2 years ago

If someone wanted to explore a macro-based approach, that should be entirely possible to build without having to add additional stuff directly to axum.

I want to explore a direction with as few macros as possible but that shouldn't hold back others.

davidpdrsn commented 2 years ago

From https://github.com/tokio-rs/axum/pull/459#issuecomment-1004730955:

An update on this: I haven't had much motivation to work on this lately so if someone wants to continue the work that'd be much appreciated 😊

This PR should contain the overall structure, and hopefully it's clear; otherwise just ping me if you have questions. The goal is also to develop things in an external crate, so the work doesn't necessarily need to happen in this repo.

dzmitry-lahoda commented 2 years ago

The only reason I look at Rocket now is its OpenAPI generator, https://github.com/GREsau/okapi. I will donate 25 USD for a generator.

tasn commented 2 years ago

Just my 2c on why it's so important to us: in addition to the nice benefit of having the OpenAPI spec generated for you rather than manually (and potentially incorrectly) creating it, we also use the generated spec in CI to verify that we don't accidentally break the API, and that when we do change the API, it's reviewed by specific code owners.

This relates to one of Rust's most fun benefits: fearless refactoring. This is the missing piece for that.

piaoger commented 2 years ago

The only reason I look at Rocket now is its OpenAPI generator, https://github.com/GREsau/okapi. I will donate 25 USD for a generator.

I echo you. I had used warp in production and it works well. For a new small microservice, I'd like to give axum or poem a try. I made a simple benchmark of warp, axum, and poem and found they share similar behavior: performant, small binary, and low memory usage. One selling point of poem is OpenAPI generation.

Silentdoer commented 2 years ago

Should it be like poem's OpenAPI? Such as: #[oai(path = "/hello", method = "get", method = "post")]

tasn commented 2 years ago

@Silentdoer, axum already has all of the information other than security (easy to add using a trait on FromRequest) and supported status codes (not sure how to fix that).

Adding macros everywhere feels like overkill (and also sucks; I love that axum is free from macros).

ltog commented 2 years ago

@jakobhellermann

My code is here if anyone wants to check it out.

I can't access this repository. Is there any chance you can make it accessible again?

jakobhellermann commented 2 years ago

I can't access this repository. Is there any chance you can make it accessible again?

Should be accessible now again.

ltog commented 2 years ago

@jakobhellermann : Thank you!

SaadiSave commented 2 years ago

@Silentdoer, axum already has all of the information other than security (easy to add using a trait on FromRequest) and supported status codes (not sure how to fix that).

Adding macros everywhere feels like overkill (and also sucks; I love that axum is free from macros).

Are status codes really needed for an MVP? OpenAPI generation, even if incomplete, would be very helpful, not to mention that users can probably add status codes themselves. I think what we have right now is a good MVP.

If an incomplete implementation cannot be merged into axum, it could exist as its own crate (using extension methods on Router) until the design work needed for OpenAPI status codes is complete.

dawid-nowak commented 2 years ago

My suggestion would be to reverse the problem and generate axum routes/scaffolding from an OpenAPI specification. If done right, the boilerplate code of setting up basic scaffolding could be auto-generated, and developers could focus on providing business logic.

tasn commented 2 years ago

@dawid-nowak, I strongly disagree. Even if only because of my reasoning in https://github.com/tokio-rs/axum/issues/50#issuecomment-1015823343 though I have many other reasons to why I think generating from code is superior.

Sytten commented 2 years ago

I feel an integration with https://github.com/juhaku/utoipa would be sufficient for most people who like the code first approach.

alex-hunt-materialize commented 2 years ago

developers could focus on providing business logic.

@dawid-nowak, the request handler IS the business logic. It is totally backwards to start from the OpenAPI spec, since your requirements are guaranteed to change over time. I want to spend my time thinking about what I'm trying to do, not how to specify that in 1000 lines of YAML.

If you start from the OpenAPI spec, you end up in one of two situations:

  1. You regenerate the code, and you've overwritten the code you had before. You now have to manually copy/paste things from the git diff (I assume you committed your previous code) for all the things that you had previously implemented.
  2. You regenerate a trait, and now you have to re-specify in code all the stuff you already specified in the OpenAPI spec. This is massively better than regenerating the code directly, but you still just wasted time duplicating things and potentially getting out of sync again.

If you start from the OpenAPI spec, your clients might get updated before your server. That can't happen if the server code is the source of the spec.

If you start from the code, then your API will always be in sync, and you only have to specify things once. No need to learn a totally new type system (because that's what OpenAPI is) before you can implement your handler. It is overwhelmingly easier to specify type information in Rust than in OpenAPI.

Rust has more type knowledge than most languages. We can do better than re-specifying things that the type system already knows. It's one of the defining features of the language. We're Rust programmers, not OpenAPI programmers.

90% (yes, I made this number up, since it's actually 100% of the ones I've interacted with and I'm trying to be charitable) of the time, when people start with the OpenAPI spec, they use a single type for both the request and the response. This is almost always incorrect. Most of the time, there are optional fields in the request which are required in the response (or vice-versa), so they just mark everything as optional. This makes the generated code terrible, especially in Rust where everything will be wrapped as Option<T>. People make this mistake not because they are stupid, but because it is easier to write. If you start with the Rust code, you are far less likely to make this mistake, since the easy way is to just specify both types separately. The path of least resistance is also the path to the more correct implementation, where the type system can help force you to build correct responses and validate your inputs.
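The difference between separate request/response types and a shared all-optional DTO can be made concrete in a few lines; the types below are illustrative only:

```rust
/// Code-first style: the request has no id, the response must have one.
pub struct CreatePetRequest {
    pub name: String,
    pub tag: Option<String>, // genuinely optional
}

pub struct Pet {
    pub id: u64, // required: impossible to forget when building a response
    pub name: String,
    pub tag: Option<String>,
}

/// Spec-first anti-pattern: one shared type, everything marked optional.
/// Every consumer now has to unwrap `id` and `name`, even though the
/// server always sets them.
pub struct PetDto {
    pub id: Option<u64>,
    pub name: Option<String>,
    pub tag: Option<String>,
}

/// With distinct types, the compiler enforces that a response carries an id.
pub fn create_pet(req: CreatePetRequest) -> Pet {
    Pet { id: 42, name: req.name, tag: req.tag } // id assigned by the store
}
```

With `PetDto` the same guarantee would be a runtime check (or a bug); with `Pet` it cannot be violated at all.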

In Rust, I don't have to think about if something is required, since that's the default. If something is Optional, that's in your face all the time. Reversing the default from OpenAPI is a huge reduction in mistakes.

dawid-nowak commented 2 years ago

Great conversation :) All valid reasons but then.

Am I really that interested in bootstrapping my project N times using the tried-and-tested copy-and-paste technique, re-using stuff on the assumption that since it worked once, it must still be good?

Or wouldn't it be nicer to write my interfaces in an OpenAPI spec and have a tool bootstrap the whole project for me, using state-of-the-art, best-in-class idiomatic Rust code, and generate all the appropriate objects, so that as a developer I can just provide business logic? Imagine your speed to market :)

Let's assume I have annotated my code; then what happens?

  1. I suppose I need to make it available to others. Should I expose it in my production environment to unauthorized access? At least that would guarantee that my code matches the defined interface.
  2. Or create a sandbox? But then the one-to-one mapping between the application in production and the documentation available in the sandbox is gone.
  3. Or expose it elsewhere? But then again, who guarantees that what is documented matches production?
  4. And then there is the question of adding all those additional dependencies just to generate the documentation.
  5. And the security of those additional annotations?

On the contrary, with the OpenAPI-first approach one could publish the spec in a public repository; the production pipeline could take the spec, consistently generate the server code, and merge it with the business logic code. If that process fails, we know there is a mismatch between the business logic and the interfaces, and we don't deploy. Otherwise, we have a relatively strong guarantee that the documentation matches the APIs implemented.

dawid-nowak commented 2 years ago

@alex-hunt-materialize Some thoughts

If you start from the OpenAPI spec, your clients might get updated before your server. That can't happen if the server code is the source of the spec.

Hmm, what could happen is that your clients are out of sync with your server and you have hundreds of angry clients :) Just because you have deployed the latest and greatest doesn't mean that everyone has updated their client code.

If you start from the code, then your API will always be in sync, and you only have to specify things once. No need to learn a totally new type system (because that's what OpenAPI is) before you can implement your handler. It is overwhelming easier to specify type information in Rust than in OpenAPI.

You are right that if I expose the documentation as some sort of web page on the server that has the annotated code, then it is going to be in sync. And only then. And I think you could easily reverse that logic: if I generate my server logic from OpenAPI, then merge it with my business logic, and it compiles and passes all the tests, then I have a pretty good guarantee that my code matches the specified interface before I deploy it to the production environment.

imbolc commented 2 years ago

Or wouldn't it be nicer to write my interfaces in OpenAPI spec and have a tool to bootstrap the whole project for me using state of the art and best in class idiomatic Rust code and generate all the appropriate objects so as a developer I can just provide business logic. Imagine your speed to market :)

Even with the code-first approach, we could have this bootstrapper as a separate tool to achieve a similar speed to market. Though, I don't see how describing an API in YAML or JSON would be faster than using Rust syntax. At the least, schemars syntax is significantly more compact than the resulting JSON Schema document.

imbolc commented 2 years ago

Or wouldn't it be nicer to write my interfaces in OpenAPI spec

I would agree with this in the sense that it's better to describe OpenAPI-related logic declaratively: the layer would not only generate the schema document but also ensure correct validation, etc. But again, nothing prevents us from using Rust; here's how I do it now using schemars and jsonschema:

use std::any::type_name;

use jsonschema::JSONSchema;
use once_cell::sync::OnceCell;
use schemars::{schema_for, JsonSchema};
use serde::Serialize;
use serde_json::Value;

pub trait SchemaValidation: Sized + JsonSchema + Serialize {
    /// JSON schema document
    fn schema_document() -> &'static Value {
        static CACHE: OnceCell<Value> = OnceCell::new();
        CACHE.get_or_init(|| {
            serde_json::to_value(schema_for!(Self))
                .unwrap_or_else(|_| panic!("{}::schema_document", type_name::<Self>()))
        })
    }

    /// Compiled JSON schema
    fn schema_compiled() -> &'static JSONSchema {
        static CACHE: OnceCell<JSONSchema> = OnceCell::new();
        CACHE.get_or_init(|| {
            jsonschema::JSONSchema::compile(Self::schema_document())
                .unwrap_or_else(|_| panic!("{}::schema_compiled", type_name::<Self>()))
        })
    }

    /// Performs validation and returns first error as a string containing buggy fields path
    fn schema_validate(&self) -> Result<(), String> {
        if let Err(Some(err)) = Self::schema_compiled()
            .validate(&serde_json::to_value(self).unwrap())
            .map_err(|mut err| err.next())
        {
            Err(format!("Validation error: {}", err.instance_path))
        } else {
            Ok(())
        }
    }
}

juhaku commented 2 years ago

Just a side note: I wrote an example of axum with utoipa. You may find it here: https://github.com/juhaku/utoipa/tree/master/examples/todo-axum.

Those who want a deeper integration between axum and utoipa, just hit the thumbs up, and it may find its way to the project's kanban board at some point. However, the integration support would probably be at the level of resolving path and query parameters, as well as a default response (for status 200, maybe?), when I get that far with other frameworks as well. As for the actual path and path operation type, there is no way to access that information easily with the current design, unless the integration were done completely from axum's side.

The current example shows the support with raw utoipa and utoipa-swagger-ui.

davidpdrsn commented 2 years ago

An update for those wondering what the status of this is:

I don't currently have bandwidth to take this on so haven't made any progress. The approach I explored in https://github.com/tokio-rs/axum/pull/459 is still the way I'd go. Basically inferring things from the types instead of using macros. That should give greater flexibility and work better with IDEs.

https://github.com/tokio-rs/axum/pull/945 is another, slightly different approach that does use some macros. I'd still want to explore a macro-free solution first before merging something like https://github.com/tokio-rs/axum/pull/459.

Regardless which approach one picks it should be doable in a separate crate and shouldn't depend on any axum internals. If someone has started working on something please share it! I will gladly provide feedback 😊

tasn commented 2 years ago

Thanks for the update David! I think #945 is waiting on your feedback. Even if it shouldn't be merged, maybe it can be a good start of an external crate.

davidpdrsn commented 2 years ago

I posted https://github.com/tokio-rs/axum/pull/945#issuecomment-1132030318 just now :)

davidpdrsn commented 2 years ago

I generally don't like having issues open that aren't actionable so I think I'll close this for now.

Sytten commented 2 years ago

People will still comment on it even if you close it. Better to transfer it to a discussion or tag it properly. The fact is, this is still an issue, even if it's not actionable at the moment.

davidpdrsn commented 2 years ago

I mean, people are free to still comment, but it's not something we'll address inside axum anytime soon, so I don't see why we should keep it open. If one day we decide to build something, then we can just re-open it.

junderw commented 2 years ago

aide works pretty well!

https://docs.rs/aide/latest/aide/

Perhaps something like this can be merged upstream?

banool commented 1 year ago

Personally, this idea of closing issues because there are no plans to address them soon makes little sense to me; just apply an "aspirational" label or something. But so be it. Given the issue is now closed, where can we track progress on this work (whenever it eventuates), if not in this issue? Should we open a topic in Discussions instead?

gagbo commented 1 year ago

aide seems to be at least maintained (probably actively developed). I've been looking into updating it to axum 0.6 (non-RC) to see if it fits my use cases, and from my couple of hours of searching, it seems like the best solution for API docs in axum.

davidpdrsn commented 1 year ago

@banool You can track the work wherever it's being implemented.

I understand the frustration, but I stand by the decision to close this issue. I don't think it's fair that things we don't intend to implement (for now, at least) should remain as open issues. What makes OpenAPI special in this regard? There are many things we could do but choose not to. Should they get "aspirational" issues as well?

I've looked a bit at aide and think it looks very promising! Seems to be close to the design I had in mind. I recommend you try that and track improvements to it in their repo.

davidpdrsn commented 1 year ago

And you're free to open a Discussion if you want 😊

banool commented 1 year ago

@davidpdrsn I understand the frustration on your side too, with wanting to keep a clean issue tracker; I just wonder where we are meant to keep an eye on the roadmap? I work on open source projects too, and I agree there is no obvious solution here. In my case I usually opt for issues, even for things we're not planning to do soon, since that's where most people go looking. So there is nothing special about this feature request in particular; I would open aspirational issues for everything the project might get to at some point, since it's a good way to crowdsource ideas, gauge community interest, etc.

Of course run the project in the way that works best for you, just my 2 cents.

Eugeny commented 1 year ago

My two cents, I've worked on much bigger projects, and from my experience, if you keep closing issues that don't belong on the roadmap, all you accomplish is people opening more duplicates. There's no "inbox zero" for issue trackers.

davidpdrsn commented 1 year ago

I just wonder where we are meant to be able to keep an eye on the roadmap?

The issues are the roadmap. Currently openapi support is not on the roadmap to be built into axum so there isn't an open issue for it.

if you keep closing issues that don't belong on the roadmap, all you accomplish is people opening more duplicates.

That's not my experience so far. This issue still shows up in search and it isn't locked so people can still comment.

There's no "inbox zero" for issue trackers.

It's not about inbox zero. It's about setting expectations. I don't want people to expect that since there is an open issue about openapi that means we are working on it. I often see people commenting on issues asking what the progress is, because they expect an issue means someone is working on it.

DaAitch commented 1 year ago

Going against a lot of people here, I would like to campaign for a contract-first, code-generation approach, for the following reasons:

  1. You are in full control of the contract: it is whatever you want it to be. You write it and check it in, and anybody can read and download it from your repository, without generating it or reading Rust code.
  2. You don't spoil your code with metadata. Most of the examples don't include important API documentation, and sometimes your contract also needs special tags, e.g. a microservice identifier or issuer name for company compliance reasons, so that every API carries that documentation. You don't want to read that in your code.
  3. Writing idiomatic Rust code will not let you generate a good contract. For example, it's not possible to derive the valid status codes from code, while the other way around, generating code that only allows returning an enum, is possible. The same goes for fields that are strings but should have the "email" format. Generating code that only lets the implementor do the right thing is possible, while generating the contract for this from Rust code is only possible with special annotations, and then validation is still up to you, which is very error-prone.
  4. Extending the contract will make your server fail to compile until you implement it => no silent breaking changes.
  5. Generating a client and writing some integration tests may prove you do not break your clients.
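Point 3 can be illustrated with a sketch of what a contract-first generator might emit; everything below is hypothetical generated code, not the output of any existing tool:

```rust
/// Hypothetical code a contract-first generator could emit for
/// `GET /pets/{id}` documented with 200 and 404 responses.
pub struct Pet {
    pub name: String,
}

/// The handler can only return responses the contract documents.
pub enum GetPetResponse {
    Ok(Pet),  // 200, documented
    NotFound, // 404, documented
              // There is no variant for a status the contract doesn't mention.
}

impl GetPetResponse {
    pub fn status_code(&self) -> u16 {
        match self {
            GetPetResponse::Ok(_) => 200,
            GetPetResponse::NotFound => 404,
        }
    }
}

/// A hand-written handler forced into the contract's response set.
pub fn find_pet(id: u64) -> GetPetResponse {
    if id == 1 {
        GetPetResponse::Ok(Pet { name: "Rex".to_string() })
    } else {
        GetPetResponse::NotFound
    }
}
```

Because the enum is exhaustive, the spec's documented status codes and the implementation cannot drift apart.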

Finally, there's the fact that gRPC is also designed contract-first, and you generate your Rust server or client code from it. Would you also like to generate your .proto files from Rust code? What's different about OpenAPI/Swagger?

A year ago I started sul, which generates a tower::Service from an OpenAPI contract. I still like the approach of operating at the tower level, since it can be bridged to any server implementation, but I would now change it to work the way prost does: contract => struct MyServer (impl tower::Service) + async_trait MyService.

Eugeny commented 1 year ago

IMO the contract-first approach has its right to exist and doesn't require explicit support from the framework anyway (except for a code generation tool, which can be external).

However at the end of the day, you need your code to match up with the contract, so in the end it just means doing the same work twice.

adriangb commented 1 year ago

I think an interesting approach to e.g. avoid accidentally breaking the contract by making a code change is to export that contract statically, commit it to your repo and snapshot test against it. That also means someone can read it without compiling and running the code. That contract could even be some sort of contract test thing (more complex than an openapi.json file).
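A minimal sketch of that snapshot check follows. The `generate_spec` body is a stub standing in for real spec serialization, and in practice the committed snapshot would be read from a file in the repo rather than passed in as a string:

```rust
/// Stand-in for "serialize the app's OpenAPI document to JSON".
/// A real implementation would derive this from the router.
pub fn generate_spec() -> String {
    r#"{"openapi":"3.0.3","paths":{"/pets":{}}}"#.to_string()
}

/// Snapshot check: fail CI when the generated contract drifts from the
/// committed one, forcing an explicit, reviewed update of the snapshot.
pub fn spec_matches_snapshot(committed: &str) -> bool {
    generate_spec() == committed
}
```

Run as a test in CI, a failing comparison means the API changed without the contract change being reviewed, which is exactly the "accidentally breaking the contract" case above.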

DaAitch commented 1 year ago

However at the end of the day, you need your code to match up with the contract, so in the end it just means doing the same work twice.

Why "twice"? The amount of overlapping work is just a mapping, and the impl and contract are always in sync, verified by the compiler.

We can also use serde_json, serde_xml, etc., depending on the content type of the contract, and much more. I doubt it's possible to generate such contracts from code.

AThilenius commented 1 year ago

@DaAitch I love the idea!

That would fit our use case fantastically. I absolutely love axum's extractor approach; it's been a real joy to work with because it's so flexible and modular and plays into Rust's strong type system. We've also been using utoipa, purely to generate frontend client code. Without meaning any disrespect to the authors of that library, it's way too much boilerplate and headache for our use, so here I am again, scouring the internet for a simple RPC solution with client codegen. But I've kind of fallen for axum, and having plain old HTTP JSON requests is just so easy to debug from browsers, and is ubiquitously supported. I just want strongly typed client code (for a subset of routes) 😭

A few questions come to mind.

Edit: Also, while this almost certainly can/should be a separate tool, I think having the discussion here has the advantage of letting axum's authors chime in on typings (like ideas on how to play nicely with extractors). I'm not advocating that axum itself support this.

yellowred commented 1 year ago

I would discourage using protobuf for schema definition, as it has two major disadvantages:

  1. It requires compilation of manifests.
  2. It has limited type system.

The best approach would be to use schemas written in Rust.