Closed: thomaseizinger closed this 7 months ago
The reason this makes things faster is that we would otherwise only queue the STUN requests upon the next `handle_timeout` to the `IceAgent`. I could also solve this in my code by calling `handle_timeout` myself. This way though, all users of str0m benefit from this improvement.
Actually, I don't think I can, because `handle_timeout` has a guard to only work in increments of 50ms, and I might be adding candidates in smaller windows than that.
Interesting!
In the rest of str0m the order of actions is always:

1. Make changes to the session.
2. `poll_timeout`
3. `handle_timeout`

Depending on the change in 1, the timeout in 2 might be `0` to schedule immediate sending of things in 3. Notice that it's expected the library user doesn't delay steps 2-3 if they made changes to the session.
Looking at `agent.rs`, I wonder if we can follow the same pattern? `poll_timeout` gives us `next_binding_attempt` for every pair, and it seems like we should be able to make this have no timeout for new pairs.
> In the rest of str0m the order of actions is always:
>
> 1. Make changes to session.
> 2. `poll_timeout`
> 3. `handle_timeout`
>
> Depending on the change in 1, the timeout in 2 might be `0` to schedule immediate sending of things in 3. Notice that it's expected the library user doesn't delay steps 2-3 if they made changes to the session.
I guess that is what we are currently doing "wrong": we create a `sleep_until` future upon `poll_timeout` and don't call `handle_timeout` earlier than that. That could be changed to reset this future any time state changes within our node, which is probably the more correct way to handle these timeouts.

I'll experiment with that.
Yeah, and the final tweak would be to lower that 50 to 0 for this case.
I guess this is generally a downside of sans-IO :(
There are plenty of opportunities to use the API the "wrong" way.
Definitely!

I'm speculating that there might be a pattern or something that could be "discovered" to unlock that. If only 10% of the effort spent on perfecting async in Rust were spent on it, we'd probably have solved it already :)
Instead of a separate `poll_timeout`, it might be more intuitive to return it as part of `poll_event`, like `Event::WakeIn`.
I've found that a combination of https://github.com/firezone/firezone/pull/4022 and https://github.com/algesten/str0m/pull/477 achieves the same effect, which seems like a cleaner solution.
In my testing, I've found this to improve connection setup latency by about 0.7 seconds. There might be a better way of solving this, but I wanted to open a PR to start the discussion.