livekit / livekit-cli

Command line interface to LiveKit
https://docs.livekit.io

Load tester: latency is not being reported anymore #349

Open avivace opened 2 months ago

avivace commented 2 months ago

For some reason, mentions of latency were removed from the code, and the load tester no longer reports it, despite the examples in the documentation still showing it.

This removal is also not mentioned in any of the changelogs I could find in this repository's GitHub releases.

A release I could find that still has it is 0.6.0: https://github.com/livekit/livekit-cli/blob/7acc22982fc6cd26da529521820279fb5d3cb5c6/cmd/livekit-load-tester/main.go#L253

Any information on this?

rektdeckard commented 2 months ago

It's unclear why latency reporting was removed, but we can look into restoring this functionality 👍

avivace commented 2 months ago

thanks a lot @rektdeckard for taking a look at this!

davidzhao commented 2 months ago

I think the previous method of reporting latency was a bit hacky (encoding publishing time in the payload). In order to do this correctly, we should look at the sender reports.

avivace commented 2 months ago

> I think the previous method of reporting latency was a bit hacky (encoding publishing time in the payload). In order to do this correctly, we should look at the sender reports.

Hi @davidzhao, if you're not already working on this internally, could you point me to how it was done before and expand on how it should be done now? I could take a look and send a draft PR.

rektdeckard commented 2 months ago

@avivace We're exploring what accurate performance tracing would entail, but suffice it to say that it touches several components. Anything quick would likely be inaccurate (the previous metrics included local processing times and were a simple averaging of all tracks).

Can you tell us a bit more about your use case here? Are you checking coarse e2e latency just as a smoke test, or are you relying on it more concretely? What other metrics would you like to see in an ideal case?

avivace commented 1 week ago

> @avivace We're exploring what accurate performance tracing would entail, but suffice it to say that it touches several components. Anything quick would likely be inaccurate (the previous metrics included local processing times and were a simple averaging of all tracks).
>
> Can you tell us a bit more about your use case here? Are you checking coarse e2e latency just as a smoke test, or are you relying on it more concretely? What other metrics would you like to see in an ideal case?

@rektdeckard thanks a lot for looking into this. To be honest, at the moment we mainly want this metric as a measure of infrastructure health/status, but we may also want to rely on it in specific setups where we'd take action when latency exceeds a threshold (e.g. unsubscribing/muting when people are subscribed to the same track while physically in the same room).