akhenry closed this 5 months ago
Verified -- Testathon 3/14/24 🥧
All the above scenarios passed with flying colors on a very complex display.
Verified testing instructions.
I did observe that in two windows viewing the same view, the time conductor clock was not ticking on exactly the same timestamps. This may be a result of the rendering time in each window.
@davetsay Good catch, thank you. I hadn't anticipated this problem. Basically what's happening is that the two windows currently maintain independent workers and subscriptions to Yamcs, so they are both updating the screen on slightly different cycles. While this was always the case, the new 1Hz batching exacerbates the phenomenon and makes it visible to the user, however briefly.
It might be possible to fix this with some effort, but it's probably not trivial. I'd like to treat this as an enhancement, and get some user feedback before we do anything.
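To make the phase drift concrete, here is a minimal sketch (hypothetical names, not the actual Open MCT worker code) of a per-window 1 Hz batcher. Each window keeps its own buffer and flushes relative to its own start time, so two windows opened a few hundred milliseconds apart stay out of phase by that same offset:

```javascript
// Hypothetical per-window batcher: each window starts its own 1 Hz cycle,
// so flush times in two windows differ by their start-time offset.
function createBatcher(startMs, intervalMs = 1000) {
    let buffer = [];
    let nextFlushAt = startMs + intervalMs;
    return {
        add(datum) { buffer.push(datum); },
        // Called with the current time; returns the flushed batch, or null
        // if this window's next 1 Hz flush is not due yet.
        tick(nowMs) {
            if (nowMs < nextFlushAt) { return null; }
            nextFlushAt += intervalMs;
            const batch = buffer;
            buffer = [];
            return batch;
        }
    };
}

// Window A starts at t = 0 and Window B at t = 400 ms: A flushes at
// 1000, 2000, ... while B flushes at 1400, 2400, ... so the two screens
// briefly show different timestamps on every cycle.
```

This is only an illustration of why independent subscriptions drift; sharing one worker/subscription between windows would be one way to align the cycles, but as noted above that is non-trivial.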
Verified.
@akhenry, one observation on the step "Confirm that Open MCT telemetry resumes after loss of connectivity": I did get the "Telemetry dropped due to client rate limiting" message on reconnection.
Summary
The recent implementation of client rate limiting is triggered fairly consistently on first load in all of our deployment environments. My suspicion is that the WebSocket and remote clock subscriptions start early enough that UI loading blocks the main thread for > 1 s, causing a buffer overflow and triggering client rate limiting.
Although the warning itself is fairly innocuous, it fires so regularly that users will learn to ignore it and miss genuine rate limiting events.
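The failure mode described above can be sketched as a bounded buffer whose consumer (the main thread) stalls. The names here are assumptions for illustration, not the actual implementation: if the UI blocks for > 1 s while a 1 Hz stream keeps producing, the buffer fills and data is dropped, surfacing the warning.

```javascript
// Hypothetical bounded telemetry buffer. When the main thread stalls and
// stops draining, incoming data overflows the cap, the oldest datum is
// dropped, and the rate-limiting warning callback fires.
function createBoundedBuffer(maxSize, onDrop) {
    const queue = [];
    return {
        push(datum) {
            if (queue.length >= maxSize) {
                queue.shift();  // drop oldest to stay within the budget
                onDrop();       // would surface the rate-limiting warning
            }
            queue.push(datum);
        },
        // The UI drains the buffer when the main thread is responsive.
        drain() { return queue.splice(0, queue.length); }
    };
}
```

During normal operation `drain()` runs every cycle and `onDrop` never fires; during a blocked startup it fires on every datum past the cap, which matches the warning appearing consistently on first load.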
Expected vs Current Behavior
The Client Rate Limiting warning notification should appear only when the client is under unexpected CPU load, not during normal application initialization.
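One possible mitigation, offered purely as an assumption rather than a committed design, is a startup grace period: rate limiting can still protect the buffer during load, but the user-facing notification is suppressed until initialization has had time to finish.

```javascript
// Hypothetical notification gate: suppress the rate-limiting warning for a
// grace period after application start, so warnings caused by normal
// initialization load are not shown to the user.
function createWarningGate(startMs, graceMs = 5000) {
    return function shouldNotify(nowMs) {
        return nowMs - startMs > graceMs;
    };
}
```

The grace duration would need tuning against real load times; too short and the spurious warning returns, too long and a genuine early rate-limiting event goes unreported.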
Impact Check List
Steps to Reproduce
Environment
Additional Information