Open analogrelay opened 1 month ago
cc @heaths - Sorry, I keep hitting these fun issues that feel like they have 4 different solutions 😩 .
I'm going to sketch out Option 3 so we can see the full impact. I think it would be pretty smooth.
Regarding 1, I thought we provided a way to get the raw payload already, so there's already a way for them to deserialize from a stream, no? Seems a better approach would be another function where they could pass any `serde::Deserialize`-implementing type and use `serde_json`, `serde_xml`, or whatever they need to deserialize from an async reader, couldn't they?

This seems a much simpler approach for them and us, and at least congruent with some of our other languages that either return you a `T` or provide a means to dictate deserialization (via attributes, callback, whatever) yourself.
> Regarding 1, I thought we provided a way to get the raw payload already, so there's already a way for them to deserialize from a stream, no?
Yes, you could certainly call `into_body().json().await` on the response yourself and deserialize it into any type without having to implement `azure_core::Model`.
> This seems a much simpler approach for them and us, and at least congruent with some of our other languages that either return you a T or provide a means to dictate deserialization (via attributes, callback, whatever) yourself.
True, but it seems surprising to me that we would provide a means to call an API like, say, `secret_client.get_secret()` that returns a `T`, but `cosmos_container_client.read_item(...)` would return raw bytes that I have to deserialize myself (even if there are easy means to do that). The .NET SDK certainly performs deserialization for you, as does the Java SDK. The JavaScript and Python ones effectively return the raw JSON, but in forms that make sense to that language (an Object in JS and a Dict in Python). The Go SDK is the only one that currently returns raw bytes, and maybe that's the one we should be comparing to 🤷🏻‍♀️.
I worry that we're forcing more verbose syntax than we need to, in service of being more explicit. I think there's got to be an approach that allows us to balance concise syntax with explicitness. I also recognize that I've done my fair share of work to get us to this situation 😅. Working on doctests in particular has started to make this clear. It feels like common operations take way more code than they should. For example, compare reading the metadata of a Container in .NET:
```csharp
var response = await containerClient.ReadAsync();
Console.WriteLine($"Id: {response.Container.Id}");
```
With the equivalent code in Rust:
```rust
let container = container_client.read().await?.deserialize_body().await?;
println!("Id: {}", container.id);
```
With even a little bit of nesting, this easily hits `cargo fmt`'s line limit and starts wrapping:
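For illustration, this is just the call from above once `rustfmt` wraps it (nothing new, only the formatting):

```rust
let container = container_client
    .read()
    .await?
    .deserialize_body()
    .await?;
println!("Id: {}", container.id);
```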
It's not that bad, and maybe I'm overly concerned with verbosity here, but looking at some of the options for handling user-provided items in Cosmos, it keeps getting worse.
Here's .NET reading a Cosmos item:
```csharp
var response = await containerClient.ReadItemAsync("itemId", "partitionKey");
Console.WriteLine($"Product Name: {response.Resource.ProductName}");
```
And Rust doing the same, using the `Item<T>` wrapper newtype I'm currently using in #1830:
```rust
let item = container_client.read_item("itemId", "partitionKey", None).await?.deserialize_body().await?.into_inner();
println!("Product Name: {}", item.product_name);
```
Or, using the ability to deserialize the body manually:
```rust
// `Product` stands in for the user's own serde::Deserialize-implementing type.
let item: Product = container_client.read_item("itemId", "partitionKey", None).await?.into_body().json().await?;
println!("Product Name: {}", item.product_name);
```
Not only does Rust have longer method chains (which I can accept, as it's trying to be as opt-in as possible), but the methods are difficult to discover without narrative docs and examples. Having good examples is important, but all I did to "generate" the .NET examples was look at IntelliSense/Reference Docs, and it was very intuitive what to do. The fact that the Cosmos .NET SDK has
Maybe these problems are inherent to the goals we have with the Rust SDK (for example, the .NET SDK is eagerly deserializing, which may not be appropriate), I don't know. It just feels clunky and verbose to me, and I'm trying to puzzle through how we can smooth the experience out. It might be as simple as renaming some methods or bundling some method chains into single methods. Maybe we go back to something like a `Response<T>::json()` method that requires users to know the serialization format?
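Something like this, maybe (purely hypothetical shape, reusing the Key Vault example; `json()` here is the caller explicitly picking the wire format):

```rust
// Hypothetical: one await to send, one format-specific helper to deserialize.
let secret: Secret = secret_client.get_secret("foo").await?.json().await?;
println!("Secret {} has value {}", secret.name, secret.value);
```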
A lot of this verbosity comes from serving two main goals: keeping deserialization lazy/opt-in rather than eager, and preserving access to the raw response when callers want it. Are either of those goals in the common path? I suspect they aren't. Perhaps we need to rethink things a bit and see if there are ways that we can make the default experience concise and push the verbosity towards cases where users want to get that additional control.
I guess I'm just kinda lamenting the verbosity we have, reflecting on my own role in adding to it, and wondering how necessary it really is. I don't yet have a specific suggestion, but I'm going to keep churning on it a bit.
Rust is already more verbose than .NET, but what forced us into situations like `send().await?.deserialize().await?` has more to do with ownership. Comparing to a garbage-collected language is apples to oranges: idiomatic ownership and lifetimes, where we send the request and then pass ownership of the response on to deserialization, basically force us into this. It's not conceptually impossible to combine sending the request and deserializing the response, but then we provide no means to get a raw response when people need it unless we copy memory like Go does, which I absolutely want to avoid. Rust has a lot of potential targets where memory and performance are critical, and a GC like that of Go (unless you use TinyGo, which is more of a transpiler, IMO) negates a lot of those benefits.

So we could revisit the deserialization stack all up, but we'll likely end up with separate methods to return a raw response vs. a `T`, and I don't think that's any better.
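To make that concrete, this is roughly the shape ownership pushes us toward (a simplified sketch, not the actual `azure_core` code):

```rust
impl<T: Model> Response<T> {
    // Deserialization consumes the body stream, so this takes `self` by value:
    // once the body has been deserialized there is no raw response left to hand
    // back without buffering/copying it first.
    pub async fn deserialize_body(self) -> crate::Result<T> {
        T::from_response_body(self.body).await
    }
}
```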
As for wrapping lines: that seems to be the norm with Rust already. Mind you, I don't want to make things harder than they are, but line length is not really a concern, especially when it's popular to use iterators and LINQ-like call chains (which most people wrap in .NET anyway, if the formatter doesn't do it automatically).
But back to the main problem: I may be confused. If you're thinking about `get_secret()` as an example, we know it will always return a `Secret` model. But I thought what you were talking about is that someone has arbitrary JSON in Cosmos with an unknown schema and wants to deserialize that. Is that correct? Because that's different. Comparing again to Key Vault, a secret could indeed contain JSON (and we actually recommend that over fetching a bunch of secrets), but we don't provide deserialization helpers for that. I'm not opposed, mind you, but I don't think they should be methods on `Response<T>` or `T` or whatever: I think they'll apply far less often than anticipated and just cause more confusion.
So even in the case of a secret that contains some arbitrary user schema (it could also be XML, TOML, or whatever they want), they have to get the string value and then deserialize that into their own models, e.g.:
```rust
let secret = client.get_secret("name", None, None).await?;
// `MyConfig` stands in for whatever serde::Deserialize-implementing type the user defines.
let config: MyConfig = serde_json::from_str(&secret.value())?;
```
Or is that not the case? My thoughts were based on the assumption that, given the content of the model you're getting (which, I believe, is always JSON, yes?), the user would always need to provide a `serde::Deserialize`-implementing model.
> But back to the main problem: I may be confused. If you're thinking about get_secret() as an example, we know it will always return a Secret model. But I thought what you were talking about is that someone has arbitrary JSON in Cosmos with an unknown schema and wants to deserialize that. Is that correct? Because that's different. Comparing again to Key Vault, a secret could indeed contain JSON (and we actually recommend that over fetching a bunch of secrets) but we don't provide deserialization helpers for that.
This scenario is somewhat different from your example with a secret value. Cosmos is a JSON database, storing user-specified JSON payloads. Key Vault Secrets are a database of strings, which can store any value. Cosmos imposes very few limitations on the shape of the JSON value (it must be a JSON object with an `id` property of type string), but the user can and will store any arbitrary JSON objects they want in the database. Despite being schema-less, though, it is very explicitly a JSON store designed to hold objects serialized/deserialized to JSON. This is why all our other languages (with the exception of Go and Python) make these APIs generic, so that we can deserialize into a user's arbitrary object.
Having said all that, I've become reasonably convinced that Cosmos is a bit of an outlier here in having significant APIs that, in all SDKs other than Go (because of limitations with `json.Unmarshal`), return completely user-defined types. In addition, @Pilchie reminded me that many of our APIs that return items can be configured (through request headers) to either return the item or return no body at all. Given that, I think we need our wrapper type (`Item<T>`) anyway (and I have, in fact, updated #1830 to make that type Option-like), and having to define this type to implement `azure_core::Model` on the user's behalf isn't as big an issue.
I'm still concerned we're over-focused on some cases that aren't going to be the primary use-case, but I'm happy to leave that issue for now.
I did do a little experimenting, and will leave some notes here for posterity's sake, but at this point we're not blocked and I'm comfortable withdrawing this concern.
If we're willing to have our service client methods return a custom type implementing `IntoFuture` (note that this would be a single custom `IntoFuture` type, not like the legacy SDK, which returns a specialized builder/future type for each operation, which I'm not a fan of), it would be possible to keep the default case concise while having the flexibility we need, using APIs like this:
```rust
// == Default case: Eagerly read the body in a single await ==
let response = secret_client.get_secret("foo").await?;
println!("Secret {} has value {}", response.body.name, response.body.value);

// == Custom case: Read the body into a custom type, but still eagerly ==
let response: Response<MyCustomSecretType> = secret_client.get_secret("foo").into_custom().await?;
println!("Secret {} has value {}", response.body.my_secret_name, response.body.my_secret_value);

// == Lazy case: Don't read the body eagerly ==
// response is a "LazyResponse<T>" that defers fetching and deserialization
let response = secret_client.get_secret("foo").into_lazy().await?;
let body = response.into_body().await?;
println!("Secret {} has value {}", body.name, body.value);

// == Lazy custom case ==
// response is a "LazyResponse<T>" that defers fetching and deserialization
let response = secret_client.get_secret("foo").into_lazy().await?;
let body: MySecretResponse = response.into_custom_body().await?; // name could be changed

// == Raw bytes case ==
let response = secret_client.get_secret("foo").into_raw().await?;
let body = response.body; // returns the PinnedStream directly
// Do something with the stream.
```
Basically, by adding "decorators" to the `IntoFuture` type that return alternate futures before you've actually run the request (by `await`ing), you can change how the response is read without making the "default" case unnecessarily verbose. But I recognize that it's quite a bit of `IntoFuture` trickery. Also, we'd probably need to store some boxed closures in these types unless we want to go all the way down the path of having custom `IntoFuture` structs for each operation (which I'd prefer to avoid).
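Here's a minimal, self-contained sketch of the mechanism, just to show that it hangs together. All names are illustrative rather than the real `azure_core` types, the error type is simplified to `String`, and a real `Response` would carry status and headers too:

```rust
use std::future::{Future, IntoFuture};
use std::marker::PhantomData;
use std::pin::Pin;

type BoxFuture<T> = Pin<Box<dyn Future<Output = T> + Send>>;
// The boxed closure the service client hands in; invoking it performs the actual request.
type SendFn = Box<dyn FnOnce() -> BoxFuture<Result<Vec<u8>, String>> + Send>;

pub struct ResponseFuture<T> {
    send: SendFn,
    _marker: PhantomData<T>,
}

impl<T> ResponseFuture<T> {
    pub fn new(send: SendFn) -> Self {
        Self { send, _marker: PhantomData }
    }

    // "Decorator": select an alternate future *before* the request has been sent.
    // This one skips deserialization entirely and hands back the raw bytes.
    pub fn into_raw(self) -> impl Future<Output = Result<Vec<u8>, String>> {
        async move { (self.send)().await }
    }
}

// The default path: awaiting the client method sends the request and eagerly deserializes.
impl<T: serde::de::DeserializeOwned + Send + 'static> IntoFuture for ResponseFuture<T> {
    type Output = Result<T, String>;
    type IntoFuture = BoxFuture<Self::Output>;

    fn into_future(self) -> Self::IntoFuture {
        Box::pin(async move {
            let raw = (self.send)().await?;
            serde_json::from_slice(&raw).map_err(|e| e.to_string())
        })
    }
}
```

An `into_lazy` decorator would work the same way: wrap `self.send` in another small future type without invoking it, and only call it (and then deserialize) once the caller asks for the body.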
Ignoring the names (though, how many of these could just be `into` even if not implementing `Into<T>`?), I like the possibilities here. The only one I'm most "concerned" about is the last one, having to declare `body: PinnedStream`, i.e., that callers have to know what a `PinnedStream` is. What is `body` declared as in your examples? It probably should be a method as well. I'd prefer we don't expose fields for low-level constructs - just higher-level models.
> Ignoring the names (though, how many of these could just be `into` even if not implementing `Into<T>`?)
I agree the names are a first draft at best; I also considered things like `with_lazy` or `with_raw`, but we can bikeshed all that separately.
Using `into` doesn't work as well because it requires type annotations and named types on the left side. For example, replacing `into_lazy` with `into` would have to look something like this:
```rust
let response: LazyResponse<Secret> = secret_client.get_secret("foo").into().await?;
```
I'm not opposed to that, but I think these are cases where you want to "select" the target type via a method on the right and let the left-hand side be inferred.
> The only one I'm most "concerned" about is the last one, having to declare body: PinnedStream i.e., that callers have to know what a PinnedStream is.
Sorry, that type annotation was just intended to illustrate the return type. The return type of `into_raw` is concrete (no type inference is needed), so the left-hand side doesn't actually need the type annotation. I should have reframed it (and am editing the earlier comment to clarify).
> What is body declared as in your examples? Probably should be a method as well. I'd prefer we don't expose fields for low-level constructs - just higher-level models.
Under this approach, the status, headers, and body are now just "plain old data", so we could treat them like we treat models and use `pub` fields. Having said that, I'm quite happy to have them be methods instead.
I can work this up in a way that's a little more complete if you like the approach. In order for it to be a single `ResponseFuture` type that doesn't require each operation to derive its own, it'll have to use some boxed closures/dynamic-dispatch logic, because the service client needs to pass in a closure to perform the actual request so that the `ResponseFuture` can delay executing it until it gets `await`ed.
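Concretely, with a sketch like the one in my earlier comment, a service client method would hand over the boxed closure roughly like this (illustrative only; `perform_request` is a hypothetical stand-in for building the request and sending it through the pipeline):

```rust
pub fn get_secret(&self, name: &str) -> ResponseFuture<Secret> {
    let client = self.clone(); // assumes the client is cheaply cloneable
    let name = name.to_string();
    // Nothing is sent yet; the closure only runs once the returned future
    // (or one of its decorator futures) is awaited.
    ResponseFuture::new(Box::new(move || -> BoxFuture<Result<Vec<u8>, String>> {
        Box::pin(async move { client.perform_request(&name).await })
    }))
}
```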
I've opened https://github.com/Azure/azure-sdk-for-rust/pull/1834, which is still draft for now to avoid spamming all the reviewers, with an implementation of this option.
In working on the Cosmos DB APIs, I've identified something annoying about the current approach we're taking to deserialization. Right now, to make `Response<T>` deserializable as either JSON or XML, depending on what the service client wants, we use a trait, `azure_core::Model`, coupled with a derive macro that allows you to set the "format" to use (JSON by default, or XML).
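Roughly, from memory (the real definitions live in `typespec_client_core` and differ in detail):

```rust
// Approximately the shape of the trait: it owns deserialization of the raw response body.
pub trait Model: Sized {
    async fn from_response_body(body: ResponseBody) -> typespec::Result<Self>;
}

// Deriving it for a service model, defaulting to JSON:
#[derive(Model, Deserialize)]
struct MySecret {
    name: String,
    value: String,
}

// Or, to use XML, the derive accepts a format attribute (attribute name approximate):
#[derive(Model, Deserialize)]
#[typespec(format = "xml")]
struct MyProperties {
    id: String,
}
```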
In effect, we tie the serialization format to the type being deserialized. This is fine when we control all the types being deserialized (for example, when deserializing service API responses). We can derive this trait and users don't need to see anything related to the serialization format.
The problem arises when we need to support deserializing entirely user-authored types. We support this in two major scenarios:

1. Using a `Response::deserialize_body_into()` helper, we allow a user to deserialize any service response into their own type. To do this, a user must derive `azure_core::Model` themselves. But this isn't too onerous a requirement if the user is providing a custom model to deserialize Azure responses.
2. Some APIs, like Cosmos, provide REST APIs that return data structures that are entirely user-controlled. For example, when reading an item from a Cosmos container, the API returns the user's JSON document as-is. We don't really want users to have to implement `azure_core::Model` in this case. They'd have to add a direct dependency on `typespec_client_core` to do that, and it generally just seems undesirable to force the user to implement our own trait when they already implement `serde::Deserialize`. Today, in #1830, we work around this by returning `azure_data_cosmos::models::Item<T>`, where `Item` is a simple newtype that handles implementing `azure_core::Model` for the user (see the sketch after this list).

This type is largely useless to the user. It provides no extra data and serves only to "tag" the type with JSON deserialization logic. Meanwhile, the user has to unwrap this type to get at their user-specific type, making an already-long chain of calls and `.await`s even longer.
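For reference, the wrapper is roughly this shape (simplified; the real type in #1830 has a bit more to it):

```rust
// A transparent newtype whose only job is to carry the JSON deserialization logic
// (via an impl of azure_core::Model), so the user's T only needs serde::Deserialize.
#[derive(Clone, Debug, Deserialize)]
#[serde(transparent)]
pub struct Item<T>(T);

impl<T> Item<T> {
    // Callers have to unwrap it to get back to their own type.
    pub fn into_inner(self) -> T {
        self.0
    }
}
```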
I've got a couple thoughts on what we could do here:
1. Nothing. We accept that we need to use "newtypes" for user-defined types, or users have to derive `azure_core::Model` themselves.
2. We use the `Content-Type` response header to drive deserialization, perhaps with some kind of override from the service client. This isn't my preferred approach, as it depends on services always returning known content types (not guaranteed) and on matching against a string to parse (not ideal).
3. We use a fixed `SerializationFormat` enum to specify the format for a `Response<T>`. The service client sets this value somehow (it could be a parameter to `typespec_client_core::http::Pipeline::send`, or something they add to the response after sending). The deserialize logic in `Response<T>` would look at this value. If we want to add new formats, we'd have to add to this enum, but it seems like the list of supported formats is very stable.
4. We add a new type parameter to `Response`: `Response<T, F>`, where `T: Deserialize` and `F: SerializationFormat`. `SerializationFormat` would be a trait we create to define a serialization format. This type controls deserialization, but the "body" type `T` is now free to just be a standard `Deserialize`. This is a fairly tidy approach, except that it can pollute the type names quite a bit. We can use defaults to make that less impactful, but it still means the serialization "format" would appear in the type name for some APIs (technically it's always there, just not named).
5. We store an `Arc<dyn SerializationFormat>` (or even a `&'static dyn SerializationFormat`, since each format would only need one instance) on the `Response<T>`. This is basically the same as option 3, just using a trait object instead of a fixed enum, which could allow new formats to be added more easily.

Of these options, honestly, I think 3 is a pretty reasonable choice (a rough sketch is below). If we make the enum `#[non_exhaustive]` then we can safely add new formats later. The only place that should be `match`ing against it is our `Response<T>::deserialize_body` code anyway, and adding an extra `match` to `deserialize_body` doesn't seem like a performance risk worth digging into since we're dealing with network I/O here anyway. I'd propose either having the `typespec_client_core::http::Pipeline::send` method accept it as a parameter, or having the pipeline accept it as a constructor argument. Or perhaps a hybrid of that, where the pipeline has it as a constructor argument to set a default and there's a `send_with_format` "override" that allows you to use a different format than the default (which would be rare).