Closed withoutboats closed 5 years ago
The discussion here seems to have died down, so linking it here as part of the await
syntax question: https://internals.rust-lang.org/t/explicit-future-construction-implicit-await/7344
Implementation is blocked on #50307.
About syntax: I'd really like to have `await` as a simple keyword. For example, let's look at a concern from the blog:
We aren't exactly certain what syntax we want for the await keyword. If something is a future of a Result - as any IO future is likely to be - you want to be able to await it and then apply the `?` operator to it. But the order of precedence to enable this might seem surprising - `await io_future?` would `await` first and `?` second, despite `?` being lexically more tightly bound than `await`.
I agree here, but braces are evil. I think it's easier to remember that `?` has lower precedence than `await` and end with it:

let foo = await future?

It's easier to read, it's easier to refactor. I do believe it's the better approach.
let foo = await!(future)?

allows one to better understand the order in which operations are executed, but IMO it's less readable.
I do believe that once you get that `await foo?` executes `await` first, you have no problems with it. `?` is probably lexically more tightly bound, but `await` is on the left side and `?` is on the right one, so it's still logical enough to `await` first and handle the `Result` after it.
If any disagreements exist, please express them so we can discuss. I don't understand what a silent downvote stands for. We all wish Rust well.
I have mixed views on `await` being a keyword, @Pzixel. While it certainly has an aesthetic appeal, and is perhaps more consistent given that `async` is a keyword, "keyword bloat" in any language is a real concern. That said, does having `async` without `await` even make any sense, feature-wise? If it does, perhaps we can leave it as is. If not, I'd lean towards making `await` a keyword.
I think it's easier to remember that `?` has lower precedence than `await` and end with it
It might be possible to learn that and internalise it, but there's a strong intuition that things that are touching are more tightly bound than things that are separated by whitespace, so I think it would always read wrong on first glance in practice.
It also doesn't help in all cases, e.g. a function that returns a `Result<impl Future, _>`:
let foo = await (foo()?)?;
The concern here is not simply "can you understand the precedence of a single `await` + `?`," but also "what does it look like to chain several awaits." So even if we just picked a precedence, we would still have the problem of `await (await (await first()?).second()?).third()?`.
A summary of the options for `await` syntax, some from the RFC and the rest from the RFC thread:

- `await { future }?` or `await(future)?` (this is noisy).
- `await future?` or `(await future)?`, whichever does what is expected (both of these feel surprising).
- `await? future` (this is unusual).
- `await` postfix somehow, as in `future await?` or `future.await?` (this is unprecedented).
- A new sigil, as `?` did, as in `future@?` (this is "line noise").

That said, does having `async` without `await` even make any sense, feature-wise?
@alexreg It does. Kotlin works this way, for example. This is the "implicit await" option.
@rpjohnst Interesting. Well, I'm generally for leaving `async` and `await` as explicit features of the language, since I think that's more in the spirit of Rust, but then I'm no expert on asynchronous programming...
@alexreg async/await is a really nice feature; I work with it on a day-to-day basis in C# (which is my primary language). @rpjohnst classified all the possibilities very well. I prefer the second option, and I agree with the other characterizations (noisy/unusual/...). I have been working with async/await code for the last 5 years or so; it's really important to have such flag keywords.
@rpjohnst
So even if we just picked a precedence, we would still have the problem of await (await (await first()?).second()?).third()?.
In my practice you never write two `await`s in one line. In the very rare cases where you need it, you simply rewrite it with `then` and don't use await at all. You can see for yourself that it's much harder to read than

let first = await first()?;
let second = await first.second()?;
let third = await second.third()?;

So I think it's ok if the language discourages writing code in such a manner in order to make the primary case simpler and better.
`future await?` looks interesting although unfamiliar, and I don't see any logical counterarguments against it.
In my practice you never write two `await`s in one line.
But is this because it's a bad idea regardless of the syntax, or just because the existing `await` syntax of C# makes it ugly? People made similar arguments around `try!()` (the precursor to `?`).
The postfix and implicit versions are far less ugly:
first().await?.second().await?.third().await?
first()?.second()?.third()?
But is this because it's a bad idea regardless of the syntax, or just because the existing await syntax of C# makes it ugly?
I think it's a bad idea regardless of the syntax, because having one line per async operation is already complex enough to understand and hard to debug. Having them chained in a single statement seems even worse.
For example, let's take a look at real code (I have taken one piece from my project):
[Fact]
public async Task Should_UpdateTrackableStatus()
{
var web3 = TestHelper.GetWeb3();
var factory = await SeasonFactory.DeployAsync(web3);
var season = await factory.CreateSeasonAsync(DateTimeOffset.UtcNow, DateTimeOffset.UtcNow.AddDays(1));
var request = await season.GetOrCreateRequestAsync("123");
var trackableStatus = new StatusUpdate(DateTimeOffset.UtcNow, Request.TrackableStatuses.First(), "Trackable status");
var nonTrackableStatus = new StatusUpdate(DateTimeOffset.UtcNow, 0, "Nontrackable status");
await request.UpdateStatusAsync(trackableStatus);
await request.UpdateStatusAsync(nonTrackableStatus);
var statuses = await request.GetStatusesAsync();
Assert.Single(statuses);
Assert.Equal(trackableStatus, statuses.Single());
}
It shows that in practice it isn't worth chaining `await`s even if the syntax allows it, because it would become completely unreadable. `await` just makes a one-liner even harder to write and read, but I do believe that's not the only reason why it's bad.
The postfix and implicit versions are far less ugly
The possibility to distinguish task start and task await is really important. For example, I often write code like this (again, a snippet from the project):
public async Task<StatusUpdate[]> GetStatusesAsync()
{
int statusUpdatesCount = await Contract.GetFunction("getStatusUpdatesCount").CallAsync<int>();
var getStatusUpdate = Contract.GetFunction("getStatusUpdate");
var tasks = Enumerable.Range(0, statusUpdatesCount).Select(async i =>
{
var statusUpdate = await getStatusUpdate.CallDeserializingToObjectAsync<StatusUpdateStruct>(i);
return new StatusUpdate(XDateTime.UtcOffsetFromTicks(statusUpdate.UpdateDate), statusUpdate.StatusCode, statusUpdate.Note);
});
return await Task.WhenAll(tasks);
}
Here we are creating N async requests and then awaiting them. We don't await on each loop iteration; instead we first create an array of async requests and then await them all at once.
I don't know Kotlin, so maybe they resolve this somehow. But I don't see how you can express this if "running" and "awaiting" a task are the same thing.
So I think the implicit version is a no-go even in much more implicit languages like C#. In Rust, with its rules that don't even allow you to implicitly convert a `u8` to an `i32`, it would be much more confusing.
@Pzixel Yeah, the second option sounds like one of the more preferable ones. I've used `async`/`await` in C# too, but not very much, since I haven't programmed principally in C# for some years now. As for precedence, `await (future?)` is more natural to me.
@rpjohnst I kind of like the idea of a postfix operator, but I'm also worried about readability and the assumptions people will make – it could easily get confused for a member of a struct named `await`.
Possibility to distinguish task start and task await is really important.
For what it's worth, the implicit version does do this. It was discussed to death both in the RFC thread and in the internals thread, so I won't go into a lot of detail here, but the basic idea is only that it moves the explicitness from the task await to task construction - it doesn't introduce any new implicitness.
Your example would look something like this:
pub async fn get_statuses() -> Vec<StatusUpdate> {
    // get_status_updates is also an `async fn`, but calling it works just like any other call:
    let count = get_status_updates();
    let mut tasks = vec![];
    for i in 0..count {
        // Here is where task *construction* becomes explicit, as an async block:
        tasks.push(async {
            // Again, simply *calling* get_status_update looks just like a sync call:
            let status_update: StatusUpdateStruct = get_status_update(i).deserialize();
            StatusUpdate::new(utc_from_ticks(status_update.update_date), status_update.status_code, status_update.note)
        });
    }
    // And finally, launching the explicitly-constructed futures is also explicit, while awaiting the result is implicit:
    join_all(&tasks[..])
}
This is what I meant by "for this to work, the act of constructing a future must also be made explicit." It's very similar to working with threads in sync code - calling a function always waits for it to complete before resuming the caller, and there are separate tools for introducing concurrency. For example, closures and `thread::spawn`/`join` correspond to async blocks and `join_all`/`select`/etc.
For what it's worth, the implicit version does do this. It was discussed to death both in the RFC thread and in the internals thread, so I won't go into a lot of detail here, but the basic idea is only that it moves the explicitness from the task await to task construction- it doesn't introduce any new implicitness.
I believe it does. I can't see here what the flow would be in this function, or where the points are at which execution breaks until an await is completed. I only see an `async` block which says "hello, somewhere in here there are async functions, try to find out which ones, you will be surprised!".
Another point: Rust tends to be a language where you can express everything, stay close to bare metal and so on. I'd like to provide some quite artificial code, but I think it illustrates the idea:
var a = await fooAsync(); // awaiting first task
var b = barAsync(); //running second task
var c = await bazAsync(); // awaiting third task
if (c.IsSomeCondition && b.Status != TaskStatus.RanToCompletion) // if some condition is true and b is still running
{
var firstFinishedTask = await Task.Any(b, Task.Delay(5000)); // waiting for 5 more seconds;
if (firstFinishedTask != b) // our task is timeouted
throw new Exception(); // doing something
// more logic here
}
else
{
// more logic here
}
Rust always tends to provide full control over what's happening. `await` allows you to specify the points where execution suspends and continues. It also allows you to `unwrap` the value inside a future. If you allow implicit awaiting at the use site, it has several implications:

- You can't tell from the code whether a variable is a `Future<T>` or the awaited `T` itself. It's not an issue with keywords: if the keyword exists, then the result is `T`, otherwise it's `Future<T>`.
- In your example I don't see why it interrupts execution at the `get_status_updates` line, but doesn't at `get_status_update`. They are quite similar to each other. So either it doesn't work the way the original code did, or it's so complicated that I can't see it even when I'm quite familiar with the subject. Neither alternative does this option a favor.

I can't see here what the flow would be in this function, or where the points are at which execution breaks until an await is completed.
Yes, this is what I meant by "this makes suspension points harder to see." If you read the linked internals thread, I made an argument for why this isn't that big of a problem. You don't have to write any new code, you just put the annotations in a different place (`async` blocks instead of `await`ed expressions). IDEs have no problem telling what the type is (it's always `T` for function calls and `Future<Output = T>` for `async` blocks).
I will also note that your understanding is probably wrong regardless of the syntax. Rust's `async` functions do not run any code at all until they are awaited in some way, so your `b.Status != TaskStatus.RanToCompletion` check will always pass. This was also discussed to death in the RFC thread, if you're interested in why it works this way.
In your example I don't see why it does interrupt execution at the `get_status_updates` line, but it doesn't on `get_status_update`. They are quite similar to each other.

It does interrupt execution in both places. The key is that `async` blocks don't run until they are awaited, because this is true of all futures in Rust, as I described above. In my example, `get_statuses` calls (and thus awaits) `get_status_updates`, then in the loop it constructs (but does not await) `count` futures, then it calls (and thus awaits) `join_all`, at which point those futures concurrently call (and thus await) `get_status_update`.
The only difference with your example is when exactly the futures start running: in yours, it's during the loop; in mine, it's during `join_all`. But this is a fundamental part of how Rust futures work, not anything to do with the implicit syntax or even with `async`/`await` at all.
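This laziness can be demonstrated with only the standard library. The `block_on` below is a minimal single-future executor written purely for illustration (it assumes the future never needs a real waker); the names are ours, not from any proposal:

```rust
use std::cell::Cell;
use std::future::Future;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Minimal executor: poll one future to completion with a no-op waker.
fn block_on<F: Future>(fut: F) -> F::Output {
    fn raw() -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn clone(_: *const ()) -> RawWaker {
        raw()
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);

    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = Box::pin(fut);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    let ran = Cell::new(false);
    let fut = async {
        ran.set(true); // only runs once the future is polled
        42
    };
    assert!(!ran.get()); // constructing the future ran none of its body
    assert_eq!(block_on(fut), 42);
    assert!(ran.get()); // awaiting (polling) ran it to completion
}
```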
I will also note that your understanding is probably wrong regardless of the syntax. Rust's async functions do not run any code at all until they are awaited in some way, so your b.Status != TaskStatus.RanToCompletion check will always pass.
Yes, C# tasks are executed synchronously until the first suspension point. Thank you for pointing that out. However, it doesn't really matter, because I should still be able to run some task in the background while executing the rest of the method and then check whether the background task has finished. E.g. it could be
var a = await fooAsync(); // awaiting first task
var b = Task.Run(() => barAsync()); //running background task somehow
// the rest of the method is the same
I've got your idea about `async` blocks, and as I see it they are the same beast, but with more disadvantages. In the original proposal each async task is paired with an `await`. With `async` blocks each task would be paired with an `async` block at the construction point, so we are in almost the same situation as before (a 1:1 relationship), but even a bit worse, because it feels more unnatural and is harder to understand, since callsite behavior becomes context-dependent. With await I can see `let a = foo()` or `let b = await foo()` and I would know whether this task is just constructed, or constructed and awaited. If I see `let a = foo()` with `async` blocks, I have to look for some `async` above, if I get you right, because in this case
pub async fn get_statuses() -> Vec<StatusUpdate> {
    // get_status_updates is also an `async fn`, but calling it works just like any other call:
    let count = get_status_updates();
    let mut tasks = vec![];
    for i in 0..count {
        // Here is where task *construction* becomes explicit, as an async block:
        tasks.push(async {
            // Again, simply *calling* get_status_update looks just like a sync call:
            let status_update: StatusUpdateStruct = get_status_update(i).deserialize();
            StatusUpdate::new(utc_from_ticks(status_update.update_date), status_update.status_code, status_update.note)
        });
    }
    // And finally, launching the explicitly-constructed futures is also explicit, while awaiting the result is implicit:
    join_all(&tasks[..])
}
we are awaiting all tasks at once, while here
pub async fn get_statuses() -> Vec<StatusUpdate> {
    // get_status_updates is also an `async fn`, but calling it works just like any other call:
    let count = get_status_updates();
    let mut tasks = vec![];
    for i in 0..count {
        // Isn't "just a construction" anymore
        tasks.push({
            let status_update: StatusUpdateStruct = get_status_update(i).deserialize();
            StatusUpdate::new(utc_from_ticks(status_update.update_date), status_update.status_code, status_update.note)
        });
    }
    tasks
}
we are executing them one by one.
Thus I can't say what the exact behavior of this part is:

let status_update: StatusUpdateStruct = get_status_update(i).deserialize();
StatusUpdate::new(utc_from_ticks(status_update.update_date), status_update.status_code, status_update.note)

without having more context.
And things get weirder with nested blocks. Not to mention questions about tooling, etc.
callsite behavior becomes context-depended
This is already true with normal sync code and closures. For example:
// Construct a closure, delaying `do_something_synchronous()`:
task.push(|| {
let data = do_something_synchronous();
StatusUpdate { data }
});
vs
// Execute a block, immediately running `do_something_synchronous()`:
task.push({
let data = do_something_synchronous();
StatusUpdate { data }
});
One other thing that you should note from the full implicit await proposal is that you can't call `async fn`s from non-`async` contexts. This means that the function call syntax `some_function(arg1, arg2, etc)` always runs `some_function`'s body to completion before the caller continues, regardless of whether `some_function` is `async`. So entry into an `async` context is always marked explicitly, and function call syntax is actually more consistent.
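A sketch of the "wrap it in an async block" pattern in today's Rust (all names here are hypothetical, and the block uses the `.await` syntax that was eventually stabilized): the non-async `make_fetch_future` can't run `fetch` to completion, but it can hand back a future for the caller to spawn or block on. The `poll_once` helper is our own test scaffolding, sufficient only because this future is immediately ready.

```rust
use std::future::Future;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Hypothetical async fn standing in for some IO-bound work.
async fn fetch() -> u32 {
    42
}

// A non-async context: we can't await here, but we can still name a future
// by wrapping the call in an async block.
fn make_fetch_future() -> impl Future<Output = u32> {
    async { fetch().await }
}

// Test scaffolding: poll a future once with a no-op waker.
fn poll_once<F: Future>(fut: F) -> Option<F::Output> {
    fn raw() -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn clone(_: *const ()) -> RawWaker {
        raw()
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);

    let waker = unsafe { Waker::from_raw(raw()) };
    match Box::pin(fut).as_mut().poll(&mut Context::from_waker(&waker)) {
        Poll::Ready(v) => Some(v),
        Poll::Pending => None,
    }
}

fn main() {
    let fut = make_fetch_future(); // fine even though we're not in an async context
    assert_eq!(poll_once(fut), Some(42));
}
```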
Regarding await syntax: what about a macro with method syntax? I can't find an actual RFC for allowing this, but I've found a few discussions (1, 2) on reddit, so the idea is not unprecedented. This would allow `await` to work in postfix position without making it a keyword / introducing new syntax for only this feature.
// Postfix await-as-a-keyword. Looks as if we were accessing a Result<_, _> field,
// unless await is syntax-highlighted
first().await?.second().await?.third().await?
// Macro with method syntax. A few more symbols, but clearly a macro invocation that
// can affect control flow
first().await!()?.second().await!()?.third().await!()?
There is a library from the Scala-world which simplifies monad compositions: http://monadless.io
Maybe some ideas are interesting for Rust.
quote from the docs:
Most mainstream languages have support for asynchronous programming using the async/await idiom or are implementing it (e.g. F#, C#/VB, Javascript, Python, Swift). Although useful, async/await is usually tied to a particular monad that represents asynchronous computations (Task, Future, etc.).
This library implements a solution similar to async/await but generalized to any monad type. This generalization is a major factor considering that some codebases use other monads like Task in addition to Future for asynchronous computations.
Given a monad `M`, the generalization uses the concept of lifting regular values to a monad (`T => M[T]`) and unlifting values from a monad instance (`M[T] => T`). Example usage:

lift {
  val a = unlift(callServiceA())
  val b = unlift(callServiceB(a))
  val c = unlift(callServiceC(b))
  (a, c)
}

Note that lift corresponds to async and unlift to await.
This is already true with normal sync code and closures. For example:

I see several differences here:

- A closure is marked right at the point where it is created, much like `await`. With `await` we don't need a surrounding context; with `async` we have to have one. The former wins, because it provides the same features but requires knowing less about the code.
- `async` functions may be quite big (as big as regular functions) and complicated, so the `async` marker may be far away from the calls it affects.
- Closures are rarely nested (apart from `then` calls, which is exactly what `await` is proposed to replace), while `async` blocks would be nested frequently.

One other thing that you should note from the full implicit await proposal is that you can't call async fns from non-async contexts.
Hmm, I didn't notice that. It doesn't sound good, because in my practice you often want to run async code from a non-async context. In C# `async` is just a keyword that allows the compiler to rewrite the function body; it doesn't affect the function interface in any way, so `async Task<Foo>` and `Task<Foo>` are completely interchangeable, and that decouples implementation and API.
Sometimes you may want to block on an `async` task, e.g. when you want to call some network API from `main`. You have to block (otherwise you return to the OS and the program ends), but you have to run an async HTTP request. I'm not sure what the solution could be here, except hacking `main` to allow it to be async (as we do with the `Result` main return type), if you cannot call it from a non-async main.
Another consideration in favor of the current `await` is how it works in other popular languages (as noted by @fdietze). It makes it easier to migrate from other languages such as C#/TypeScript/JS/Python and is thus a better approach in terms of drumming up new people.
I see several differences here

You should also realize that the main RFC already has `async` blocks, with the same semantics as the implicit version, then.

It doesn't sound good, because in my practice you often want to run async from non-async context.

This is not an issue. You can still use `async` blocks in non-`async` contexts (which is fine because they just evaluate to an `F: Future` as always), and you can still spawn or block on futures using exactly the same API as before. You just can't call `async fn`s; instead you wrap the call to them in an `async` block - as you would regardless of the context you're in, if you want an `F: Future` out of it.
async is just a keyword that allows compiler to rewrite function body, it doesn't affect function interface in any way
Yes, this is a legitimate difference between the proposals. It was also covered in the internals thread. Arguably, having different interfaces for the two is useful because it shows you that the `async fn` version will not run any code as part of construction, while the `-> impl Future` version may e.g. initiate a request before giving you an `F: Future`. It also makes `async fn`s more consistent with normal `fn`s, in that calling something declared as `-> T` will always give you a `T`, regardless of whether it's `async`.
(You should also note that in Rust there is still quite a leap between `async fn` and the `Future`-returning version, as described in the RFC. The `async fn` version does not mention `Future` anywhere in its signature; and the manual version requires `impl Trait`, which carries with it some problems to do with lifetimes. This is, in fact, part of the motivation for `async fn` to begin with.)
It makes it easier to migrate from other languages such as C#/TypeScript/JS/Python
This is an advantage only for the literal await future
syntax, which is fairly problematic on its own in Rust. Anything else we might end up with also has a mismatch with those languages, while implicit await at least has a) similarities with Kotlin and b) similarities with synchronous, thread-based code.
Yes, this is a legitimate difference between the proposals. It was also covered in the internals thread. Arguably, having different interfaces for the two is useful

I'd say having different interfaces for the two has some disadvantages, because having the API depend on an implementation detail doesn't sound good to me. For example, you are writing a contract that simply delegates a call to an internal future:
fn foo(&self) -> Future<T> {
self.myService.foo()
}
And then you just want to add some logging
async fn foo(&self) -> T {
let result = await self.myService.foo();
self.logger.log("foo executed with result {}.", result);
result
}
And it becomes a breaking change. Whoa?
This is an advantage only for the literal await future syntax, which is fairly problematic on its own in Rust. Anything else we might end up with also has a mismatch with those languages, while implicit await at least has a) similarities with Kotlin and b) similarities with synchronous, thread-based code.
It's an advantage for any `await` syntax - `await foo`/`foo await`/`foo@`/`foo.await`/... Once you get that it's the same thing, the only difference is whether you place it before or after, or have a sigil instead of a keyword.
You should also note that in Rust there is still quite a leap between async fn and the Future-returning version, as described in the RFC
I know it and it disquiets me a lot.
And it becomes a breaking change.

You can get around that by returning an `async` block. Under the implicit await proposal, your example looks like this:
fn foo(&self) -> impl Future<Output = T> { // Note: you never could return `Future<T>`...
async { self.my_service.foo() } // ...and under the proposal you couldn't call `foo` outside of `async` either.
}
And with logging:
fn foo(&self) -> impl Future<Output = T> {
async {
let result = self.my_service.foo();
self.logger.log("foo executed with result {}.", result);
result
}
}
The bigger issue with having this distinction arises during the transition of the ecosystem from manual future implementations and combinators (the only way today) to async/await. But even then the proposal allows you to keep the old interface around and provide a new async one alongside it. C# is full of that pattern, for example.
Well, that sounds reasonable.
However, I do believe such implicitness (we can't see whether `foo()` here is an async or sync function) leads to the same problems that arose in protocols such as COM+, and was the reason WCF was implemented the way it was. People had problems when async remote requests looked like simple method calls.
This code looks perfectly fine, except I can't see whether some request is async or sync. I believe that's important information. For example:
fn foo(&self) -> impl Future<Output = T> {
async {
let result = self.my_service.foo();
self.logger.log("foo executed with result {}.", result);
let mut bars: Vec<Bar> = Vec::new();
for i in 0..100 {
bars.push(self.my_other_service.bar(i, result));
}
result
}
}
It's crucial to know whether `bar` is a sync or async function. I often see `await` in a loop as a marker that the code has to be changed to achieve better throughput and performance. This is code I reviewed yesterday (the code is suboptimal, but it's one of the review iterations):
As you can see, I easily spotted that we have a looping await here and asked for it to be changed. When the change was committed, we got a 3x page load speedup. Without `await` I could easily have overlooked this misbehavior.
I admit I haven't used Kotlin, but the last time I looked at that language, it seemed to be mostly a variant of Java with less syntax, up to the point where it was easy to mechanically translate one to the other. I can also imagine why it would be liked in the world of Java (which tends to be a little syntax-heavy), and I'm aware it recently got a boost in popularity specifically due to being not-Java (the Oracle vs. Google situation).
However, if we decide to take popularity and familiarity into account, we might want to take a look at what JavaScript does, which is also explicit `await`.
That said, `await` was introduced to mainstream languages by C#, which is maybe one language where usability was considered to be of utmost importance. In C#, asynchronous calls are indicated not only by the `await` keyword, but also by the `Async` suffix of the method names. The other language feature that shares most with `await`, `yield return`, is also prominently visible in code.
Why is that? My take on it is that generators and asynchronous calls are too powerful constructs to let them pass unnoticed in code. There's a hierarchy of control flow operators:

- function and method calls (except in languages like Pascal, where there's no difference at the call site between a nullary function and a variable)
- goto (all right, it's not a strict hierarchy)
- generators (yield return tends to stand out)
- asynchronous calls (await plus the Async suffix)

Notice how they also go from less to more verbose, according to their expressiveness or power.
Of course, other languages took different approaches. Scheme continuations (like in call/cc
, which isn't too different from await
) or macros have no syntax to show what you are calling. For macros, Rust took the approach of making it easy to see them.
So I would argue that having less syntax isn't desirable in itself (there are languages like APL or Perl for that), and that syntax doesn't have to be just boilerplate, and has an important role in readability.
There's also a parallel argument (sorry, I can't remember the source, but it might have come from someone in the language team) that people are more comfortable with noisy syntax for new features when they are new, but then are fine with a less verbose one once they end up to be commonly used.
As for the question of `await!(foo)?` vs. `await foo?`, I'm in the former camp. You can internalise pretty much any syntax, however we are too used to taking cues from spacing and proximity. With `await foo?` there's a large chance one will second-guess themselves on the precedence of the two operators, while the braces make it clear what's happening. Saving three characters isn't worth it. And as for the practice of chaining `await!`s, while it might be a popular idiom in some languages, I feel it has too many downsides, like poor readability and interaction with debuggers, to be worth optimizing for.
Saving three characters isn't worth it.
In my anecdotal experience, extra characters (e.g. longer names) aren't much of a problem, but extra tokens can be really annoying. In terms of a CPU analogy, a long name is straightline code with good locality - I can just type it out from muscle memory - while the same number of characters when it involves multiple tokens (e.g. punctuation) is branchy and full of cache misses.
(I fully agree that `await foo?` would be highly non-obvious and we should avoid it, and that having to type more tokens would be far preferable; my observation is only that not all characters are created equal.)
@rpjohnst I think your alternative proposal might have slightly better reception if it were presented as "explicit async" rather than "implicit await" :-)
It's crucial to know whether `bar` is a sync or async function.
I'm not sure this is really any different from knowing whether some function is cheap or expensive, or whether it does IO or not, or whether it touches some global state or not. (This also applies to @lnicola's hierarchy - if async calls run to completion just like sync calls, then they're really no different in terms of power!)
For example, the fact that the call was in a loop is just as, if not more, important than the fact that it was async. And in Rust, where parallelization is so much easier to get right, you could just as well go around suggesting that expensive-looking synchronous loops be switched to Rayon iterators!
So I don't think requiring `await` is actually all that important for catching these optimizations. Loops are already always good places to look for optimization, and `async fn`s are already a good indicator that you can get some cheap IO concurrency. If you find yourself missing those opportunities, you could even write a Clippy lint for "async call in a loop" that you run occasionally. It would be great to have a similar lint for synchronous code as well!
The motivation for "explicit async" is not simply "less syntax," as @lnicola implies. It's to make the behavior of function call syntax more consistent, so that `foo()` always runs `foo`'s body to completion. Under this proposal, leaving out an annotation just gives you less-concurrent code, which is how virtually all code already behaves. Under "explicit await," leaving out an annotation introduces accidental concurrency, or at least accidental interleaving, which is problematic.
I think your alternative proposal might have slightly better reception if it were presented as "explicit async" rather than "implicit await" :-)
The thread is named "explicit future construction, implicit await," but it seems the latter name has stuck. :P
I'm not sure this is really any different from knowing whether some function is cheap or expensive, or whether it does IO or not, or whether it touches some global state or not. (This also applies to @lnicola's hierarchy- if async calls run to completion just like sync calls, then they're really no different in terms of power!)
I think this is as important as knowing that a function changes some state, and we already have the `mut` keyword on both the call side and the caller side.
The motivation for "explicit async" is not simply "less syntax," as @lnicola implies. It's to make the behavior of function call syntax more consistent, so that foo() always runs foo's body to completion.
On one side that's a good consideration. On the other, you can easily separate future creation from future execution. I mean, if foo
returns you some abstraction that allows you then to call run
and get some result it doesn't make foo
useless trash that does nothing; it does a very useful thing: it constructs an object you can call methods on later. It doesn't make it any different. The foo
method we call is just a blackbox and we see its signature Future<Output=T>
and it actually returns a future. So we explicitly await
it when we want to do so.
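That separation can be sketched in plain synchronous Rust. `Deferred` and `run` here are hypothetical names for illustration, not a real API: calling `foo()` only builds a value describing the work, and nothing executes until `run()`.

```rust
// A sketch of "construction is useful on its own": `foo()` builds a
// description of work; `run()` is the explicit execution step.
struct Deferred<F: FnOnce() -> i32> {
    work: F,
}

impl<F: FnOnce() -> i32> Deferred<F> {
    fn run(self) -> i32 {
        (self.work)()
    }
}

fn foo() -> Deferred<impl FnOnce() -> i32> {
    Deferred { work: || 40 + 2 }
}

fn main() {
    let d = foo();   // nothing has executed yet
    let v = d.run(); // the explicit "await"-like step
    assert_eq!(v, 42);
    println!("{}", v);
}
```

This mirrors the argument above: the constructor is not useless just because the work is deferred.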
The thread is named "explicit future construction, implicit await," but it seems the latter name has stuck. :P
I personally think that the better alternative is "explicit async explicit await" :)
P.S.
I was also struck by a thought tonight: did you try to communicate with the C# LDM? For example, folks like @HaloFour , @gafter or @CyrusNajmabadi . It may be a really good idea to ask them why they chose the syntax they did. I'd propose asking people from other languages as well, but I merely don't know them :) I'm sure they had multiple debates about the existing syntax, and they may have some useful ideas.
It doesn't mean Rust has to have this syntax because C# does, but it would allow a more informed decision.
I personally think that the better alternative is "explicit async explicit await" :)
The main proposal isn't "explicit async," though- that's why I picked the name. It's "implicit async," because you can't tell at a glance where asynchrony is being introduced. Any unannotated function call might be constructing a future without awaiting it, even though Future
appears nowhere in its signature.
For what it's worth, the internals thread does include an "explicit async explicit await" alternative, because that's future-compatible with either main alternative. (See the final section of the first post.)
did you try to communicate with C# LDM?
The author of the main RFC did. The main point that came out of it, as far I remember, was the decision not to include Future
in the signature of async fn
s. In C#, you can replace Task
with other types to have some control over how the function is driven. But in Rust, we don't (and won't) have any such mechanism- all futures will go through a single trait, so there's no need to write that trait out every time.
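For reference, the signature point above can be sketched as follows. The names `fetch_len` and `poll_once` are illustrative, and the hand-rolled no-op-waker driver is only enough for futures that are immediately ready; real code would use an executor crate.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// `async fn` hides the Future in its signature; these two declarations
// are roughly equivalent:
async fn fetch_len(s: String) -> usize {
    s.len()
}

fn fetch_len_desugared(s: String) -> impl Future<Output = usize> {
    async move { s.len() }
}

// Throwaway single-poll driver with a no-op waker (sketch only).
fn poll_once<F: Future>(mut fut: F) -> Option<F::Output> {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    // Safety: `fut` is pinned to this stack frame and never moved again.
    let fut = unsafe { Pin::new_unchecked(&mut fut) };
    match fut.poll(&mut cx) {
        Poll::Ready(v) => Some(v),
        Poll::Pending => None,
    }
}

fn main() {
    assert_eq!(poll_once(fetch_len("hello".into())), Some(5));
    assert_eq!(poll_once(fetch_len_desugared("hello".into())), Some(5));
    println!("ok");
}
```

Since every `async fn` goes through the single `Future` trait, writing the trait out in every signature adds nothing, which is the decision described above.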
We also communicated with the Dart language designers, and that was a large part of my motivation for writing up the "explicit async" proposal. Dart 1 had a problem because functions didn't run to their first await when called (not quite the same as how Rust works, but similar), and that caused such massive confusion that in Dart 2 they changed so functions do run to their first await when called. Rust can't do that for other reasons, but it could run the entire function when called, which would also avoid that confusion.
We also communicated with the Dart language designers, and that was a large part of my motivation for writing up the "explicit async" proposal. Dart 1 had a problem because functions didn't run to their first await when called (not quite the same as how Rust works, but similar), and that caused such massive confusion that in Dart 2 they changed so functions do run to their first await when called. Rust can't do that for other reasons, but it could run the entire function when called, which would also avoid that confusion.
Great experience, I wasn't aware of it. Nice to hear you've done such massive work. Well done 👍
I was also struck by a thought tonight: did you try to communicate with the C# LDM? For example, folks like @HaloFour , @gafter or @CyrusNajmabadi . It may be a really good idea to ask them why they chose the syntax they did.
I'm happy to provide any info you're interested in. However, I've only skimmed through this thread. Would it be possible to condense down any specific questions you currently have?
Regarding await
syntax (this might be completely stupid, feel free to shout at me; I am an async programming noob and I have no idea what I am talking about):
Instead of using the word "await", can we not introduce a symbol/operator, similar to ?
. For example, it could be #
or @
or something else that is currently unused.
For example, if it were a postfix operator:
let stuff = func()#?;
let chain = blah1()?.blah2()#.blah3()#?;
It is very concise and reads naturally from left to right: await first (#
), then handle errors (?
). It doesn't have the problem that the postfix await keyword has, where .await
looks like a struct member. #
is clearly an operator.
I am not sure if postfix is the right place for it to be, but it felt that way because of precedence. As prefix:
let stuff = #func()?;
Or heck even:
let stuff = func#()?; // :-D :-D
Has this ever been discussed?
(I realise this kinda starts to approach the "random keyboard mash of symbols" syntax that Perl is infamous for ... :-D )
@rayvector https://github.com/rust-lang/rust/issues/50547#issuecomment-388108875 , 5th alternative.
@CyrusNajmabadi thank you for coming. The main question is which of the listed options you think fits the current Rust language best, or whether there is some other alternative. This topic isn't really long, so you can easily skim it top to bottom. The main question: should Rust follow the current C#/TS/... await
way, or should it implement its own? Is the current syntax some kind of "legacy" that you would like to change in some way, or does it fit C# best and remain the best option for new languages as well?
The main consideration against the C# syntax is operator precedence: await foo?
should await first and then evaluate the ?
operator. There is also the difference that, unlike C#, execution doesn't run in the caller's thread until the first await
; it doesn't start at all, the same way the current code snippet doesn't run the negativity check until GetEnumerator
is called for the first time:
IEnumerable<int> GetInts(int n)
{
if (n < 0)
throw new ArgumentOutOfRangeException(nameof(n));
for (int i = 0; i <= n; i++)
yield return i;
}
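The same deferral shows up with Rust's lazy iterators. A sketch (the `get_ints` name and the `ran` flag are just for illustration): constructing the iterator runs none of its body, including any argument checks, until it is consumed.

```rust
use std::cell::Cell;
use std::rc::Rc;

// Generator-like iterators defer even their "validation" code until
// first consumption; `ran` records whether the body has executed.
fn get_ints(n: i32, ran: Rc<Cell<bool>>) -> impl Iterator<Item = i32> {
    std::iter::once(()).flat_map(move |_| {
        ran.set(true); // stands in for the argument check
        0..=n
    })
}

fn main() {
    let ran = Rc::new(Cell::new(false));
    let it = get_ints(3, ran.clone());
    assert!(!ran.get()); // constructing the iterator ran nothing
    let v: Vec<i32> = it.collect();
    assert!(ran.get()); // the body ran only on consumption
    assert_eq!(v, vec![0, 1, 2, 3]);
    println!("ok");
}
```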
More detailed in my first comment and later discussion.
@Pzixel Oh, I guess I missed that one when I was skimming through this thread earlier ...
In any case, I haven't seen much discussion about this, other than that brief mention.
Are there any good arguments for/against?
@rayvector I argued a little here in favour of more verbose syntax. One of the reasons is the one that you mention:
the "random keyboard mash of symbols" syntax that Perl is infamous for
To clarify, I don't think await!(f)?
is really in the running for the final syntax; it was chosen specifically because it's a solid way of not committing to any particular choice. Here are the syntaxes (including the ?
operator) that I think are still "in the running":
await f?
await? f
await { f }?
await(f)?
(await f)?
f.await?
Or possibly some combination of these. The point is that several of them do contain braces to be clearer about precedence & there are a lot of options here - but the intention is that await
will be a keyword operator, not a macro, in the final version (barring some major change like what rpjohnst has proposed).
I vote for either a simple postfix await operator (e.g. ~
) or the keyword with no parens and highest precedence.
I've been reading through this thread, and I would like to propose the following:
await f? evaluates the ? operator first, and then awaits the resultant future.
(await f)? awaits the future first, and then evaluates the ? operator against the result (due to ordinary Rust operator precedence).
await? f is available as syntactic sugar for (await f)?. I believe "future returning a result" will be a super common case, so a dedicated syntax makes a lot of sense.
I agree with other commenters that await should be explicit. It's pretty painless doing this in JavaScript, and I really appreciate the explicitness and readability of Rust code, and I feel like making async implicit would ruin this for async code.
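The proposed sugar can be sketched synchronously, with closures standing in for futures. `Thunk`, `run`, and `fetch` are hypothetical names; `run` plays the role of `await`, so `await? f` would correspond to `run(f)?`.

```rust
// Closures stand in for futures; `run` stands in for `await`.
type Thunk<T> = Box<dyn FnOnce() -> T>;

fn run<T>(f: Thunk<T>) -> T {
    f() // "await": drive the deferred work to completion
}

fn fetch() -> Thunk<Result<i32, String>> {
    Box::new(|| Ok(41))
}

fn caller() -> Result<i32, String> {
    // `await? fetch()` under the proposal: await first, then apply `?`.
    let v = run(fetch())?;
    Ok(v + 1)
}

fn main() {
    assert_eq!(caller(), Ok(42));
    println!("ok");
}
```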
It occurred to me that "implicit async block" ought to be implementable as a proc_macro, which simply inserts an await
keyword before any future.
The main question is what option from listed ones you think fits better the current Rust language as it is,
Asking a C# designer what best fits the rust language is... interesting :)
I don't feel qualified to make such a determination. I like rust and dabble with it. But it's not a language i'm using day in and day out. Nor have i deeply ingrained it in my psyche. As such, i don't think i'm qualified to make any claims about what the appropriate choices for this language are. Want to ask me about Go/TypeScript/C#/VB/C++? Sure, i'd feel much more comfortable. But rust is too far out of my realm of expertise to feel comfortable with any such thoughts.
The main consideration against C# syntax is operator precedence
await foo?
This is something i do feel like i can comment on. We thought about precedence a lot with 'await' and we tried out many forms before settling on the form we wanted. One of the core things we found was that for us, and the customers (internal and external) that wanted to use this feature, it was rarely the case that people really wanted to 'chain' anything past their async call. In other words, people seemed to strongly gravitate toward 'await' being the most important part of any full-expression, and thus having it be near the top. Note: by 'full expression' i mean things like the expression you get at the top of an expression-statement, or the expression on the right of a top-level assignment, or the expression you pass as an 'argument' to something.
The tendency for people to want to 'continue on' with the 'await' inside an expr was rare. We do occasionally see things like (await expr).M()
, but those seem less common and less desirable than the amount of people doing await expr.M()
.
This is also why we didn't go with any 'implicit' form for 'await'. In practice it was something people wanted to think very clearly about, and which they wanted front-and-center in their code so they could pay attention to it. Interestingly enough, even years later, this tendency has remained. i.e. sometimes we regret many years later that something is excessively verbose. Some features are good in that way early on, but once people are comfortable with it, are better suited with something terser. That has not been the case with 'await'. People still seem to really like the heavy-weight nature of that keyword and the precedence we picked.
So far, we've been very happy with the precedence choice for our audience. We might, in the future, make some changes here. But overall there is no strong pressure to do so.
--
as well as the difference that, unlike C#, execution doesn't run in the caller's thread until the first await; it doesn't start at all, the same way the current code snippet doesn't run the negativity check until GetEnumerator is called for the first time:
IMO, the way we did enumerators was somewhat of a mistake and has led to a bunch of confusion over the years. It's been especially bad because of the propensity for a lot of code to have to be written like this:
IEnumerable<int> SomeEnumerator(X args)
{
// Validate args, do synchronous work.
return SomeEnumeratorImpl(args);
}
IEnumerable<int> SomeEnumeratorImpl(X args)
{
// ...
yield return ...;
// ...
}
People have to write this all the time because of the unexpected behavior that the iterator pattern has. I think we were worried about expensive work happening initially. However, in practice, that doesn't seem to happen, and people definitely think about the work as happening when the call happens, and the yields themselves happening when you actually finally start streaming the elements.
Linq (which is the poster child for this feature) needs to do this everywhere, greatly diminishing the value of this choice.
For await
i think things are much better. We use 'async/await' a ton ourselves, and i don't think i've ever once said "man... i wish it wasn't running the code synchronously up to the first 'await'". It simply makes sense given what the feature is. The feature is literally "run the code up to await points, then 'yield', then resume once the work you're yielding on completes". It would be super weird to me not to have these semantics, since it is precisely the awaits that are dictating flow, so why would anything be different prior to hitting the first await?
Also... how do things then work if you have something like this:
async Task FooAsync()
{
if (cond)
{
// only await in method
await ...
}
}
You can totally call this method and never hit an await. if "execution doesn't run in caller thread until first await" what actually happens here?
await? f is available as syntactic sugar for (await f)?. I believe "future returning a result" will be a super common case, so a dedicated syntax makes a lot of sense.
This resonates the most with me. It allows 'await' to be the topmost concept, but also allows simple handling of Result types.
One thing we know from C# is that people's intuition around precedence is tied to whitespace. So if you have "await x?" then it immediately feels like await
has less precedence than ?
because the ?
abuts the expression. If the above actually parsed as (await x)?
that would be surprising to our audience.
Parsing it as await (x?)
would feel the most natural just from the syntax, and would fit the need of getting a 'Result' of a future/task back, and wanting to 'await' that if you actually received a value. If that then returned a Result itself, it feels appropriate to have that combined with the 'await' to signal that it happens afterwards. So with await? x?
each ?
binds tightly to the portion of the code it most naturally relates to. The first ?
relates to the await
(and specifically the result of it), and the second relates to the x
.
if "execution doesn't run in caller thread until first await" what actually happens here?
Nothing happens until the caller awaits the return value of FooAsync
, at which point FooAsync
's body runs until either an await
or it returns.
It works this way because Rust Future
s are poll-driven, stack-allocated, and immovable after the first call to poll
. The caller must have a chance to move them into place--on the heap for top-level Future
s, or else by-value inside a parent Future
, often on the "stack frame" of a calling async fn
--before any code is executed.
This means we're stuck with either a) C# generator-like semantics, where no code runs at invocation, or b) Kotlin coroutine-like semantics, where calling the function also immediately and implicitly awaits it (with closure-like async { .. }
blocks for when you do need concurrent execution).
I kind of favor the latter, because it avoids the problem you mention with C# generators, and also avoids the operator precedence question entirely.
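Option (a) can be demonstrated with a minimal hand-rolled executor. This is a sketch using only the standard library; the no-op-waker `block_on` is not how real code would drive futures, but it shows that none of the `async fn` body runs at the call site.

```rust
use std::cell::Cell;
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Minimal busy-polling executor with a no-op waker (sketch only).
fn block_on<F: Future>(mut fut: F) -> F::Output {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    // Safety: `fut` is pinned to this stack frame and never moved again.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

async fn foo(ran: &Cell<bool>) -> i32 {
    ran.set(true); // marks that the body actually executed
    42
}

fn main() {
    let ran = Cell::new(false);
    let fut = foo(&ran);   // construct the future...
    assert!(!ran.get());   // ...none of foo's body has run yet
    let v = block_on(fut); // polling is what executes it
    assert!(ran.get());
    assert_eq!(v, 42);
    println!("ok");
}
```

The `Pin::new_unchecked` step is exactly the "caller must have a chance to move the future into place" requirement described above: once polled, the future must not move.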
@CyrusNajmabadi In Rust, Future
usually does no work until it is spawned as a Task
(it's much more similar to F# Async
):
let bar = foo();
In this case foo()
returns a Future
, but it probably doesn't actually do anything. You have to manually spawn it (which is also similar to F# Async
):
tokio::run(bar);
When it is spawned, it will then run the Future
. Since this is the default behavior of Future
, it would be more consistent for async/await in Rust to not run any code until it is spawned.
Obviously the situation is different in C#, because in C# when you call foo()
it immediately starts running the Task
, so it makes sense in C# to run code until the first await
.
Also... how do things then work if you have something like this [...] You can totally call this method and never hit an await. if "execution doesn't run in caller thread until first await" what actually happens here?
If you call FooAsync()
then it does nothing, no code is run. Then when you spawn it, it will run the code synchronously, the await
will never run, and so it immediately returns ()
(which is Rust's version of void
)
In other words, it's not "execution doesn't run in caller thread until first await", it's "execution doesn't run until it is explicitly spawned (such as with tokio::run
)"
Nothing happens until the caller awaits the return value of FooAsync, at which point FooAsync's body runs until either an await or it returns.
Ick. That seems unfortunate. There are many times i may not ever get around to awaiting something (often due to cancellation and composition with tasks). As a dev i'd still appreciate getting early errors for those (which is one of the most common reasons people want execution to run up to the await).
This means we're stuck with either a) C# generator-like semantics, where no code runs at invocation, or b) Kotlin coroutine-like semantics, where calling the function also immediately and implicitly awaits it (with closure-like async { .. } blocks for when you do need concurrent execution).
Given these, i'd far prefer the former than the latter. Just my personal pref though. If the kotlin approach feels more natural for your domain, then go for that!
@CyrusNajmabadi Ick. That seems unfortunate. There are many times i may not ever get around to awaiting something (often due to cancellation and composition with tasks). As a dev i'd still appreciate getting early errors for those (which is one of the most common reasons people want execution to run up to the await).
I feel the exact opposite. In my experience with JavaScript it is very common to forget to use await
. In that case the Promise
will still run, but the errors will be swallowed (or other weird stuff happens).
With the Rust/Haskell/F# style, either the Future
runs (with correct error handling), or it doesn't run at all. Then you notice that it isn't running, so you investigate and fix it. I believe this results in more robust code.
@Pauan @rpjohnst Thanks for the explanations. Those were approaches we considered as well. But it turned out to not actually be that desirable in practice.
In the cases where you didn't want it to "actually do anything. You have to manually spawn it", we found it cleaner to model that as returning something that generated tasks on demand. i.e. something as simple as Func<Task>
.
I feel the exact opposite. In my experience with JavaScript it is very common to forget to use await.
C# does work to try to ensure that you either awaited, or otherwise used the task sensibly.
but the errors will be swallowed
That's the opposite of what i'm saying. I'm saying i want the code to execute eagerly so that errors are things i hit immediately, even in the event that i don't ever end up getting around to executing the code in the task. This is the same with iterators. I'd much rather know i was creating it incorrectly at the point in time when i call the function, versus potentially much further down the line if/when the iterator is streamed.
Then you notice that it isn't running, so you investigate and fix it.
In the scenarios i'm talking about, "not running" is completely reasonable. After all, my application may decide at any point that it doesn't need to actually run the task. That's not the bug i'm describing. The bug i'm describing is that i didn't pass validation, and i want to find out about that as close to the point where i logically created the work as possible, as opposed to the point when the work actually needs to run. Given that these are models to describe async processing, it's often going to be the case that these are far away from each other. So having the information about issues happen as early as possible is valuable.
As mentioned, this is not hypothetical either. A similar thing happens with streams/iterators. People often create them, but then don't realize them until later. It's been an extra burden for people to have to track these things back to their source. This is why so many APIs (including the BCL) now have to do the split between the synchronous/early work and the actual deferred/lazy work.
That's the opposite of what i'm saying. I'm saying i want the code to execute eagerly so that errors are things i hit immediately, even in the event that i don't ever end up getting around to executing the code in the task.
I can understand the desire for early errors, but I'm confused: under what situation would you ever "end up not getting around to spawning the Future
"?
The way that Future
s work in Rust is that you compose Future
s together in various ways (including async/await, including parallel combinators, etc.), and by doing this it builds up a single fused Future
which contains all the sub-Future
s. And then at the top-level of your program (main
) you then use tokio::run
(or similar) to spawn it.
Aside from that single tokio::run
call in main
, you usually won't be spawning Future
s manually, instead you just compose them. And the composition naturally handles spawning/error handling/cancellation/etc. correctly.
i also want to make something clear. When i say something like:
But it turned out to not actually be that desirable in practice.
I'm talking very specifically about things with our language/platform. I can only give insight into the decisions that made sense for C#/.Net/CoreFx etc. It may be completely the case that your situation is different and what you want to optimize for and the types of approaches you should take go in an entirely different direction.
I can understand the desire for early errors, but I'm confused: under what situation would you ever "end up not getting around to spawning the Future"?
All the time :)
Consider how Roslyn (the C#/VB compiler/IDE codebase) is itself written. It is heavily async and interactive, i.e. the primary use case is for it to be used in a shared fashion with many clients accessing it. Client services are constantly interacting with the user across a wealth of features, many of which may decide that they no longer need to do the work they originally thought was important, due to the user taking any number of actions. For example, as the user is typing, we're doing tons of task compositions and manipulations, and we may end up deciding to not even get around to executing them because another event came in a few ms later.
For example, as the user is typing, we're doing tons of task compositions and manipulations, and we may end up deciding to not even get around to executing them because another event came in a few ms later.
Isn't that just handled by cancellation, though?
This is the tracking issue for RFC 2394 (rust-lang/rfcs#2394), which adds async and await syntax to the language.
I will be spearheading the implementation work of this RFC, but would appreciate mentorship as I have relatively little experience working in rustc.
TODO:
Unresolved questions:
await.
Try implementations to which we want to commit.