Open uri-weisman opened 8 months ago
Pinging @elastic/kibana-cloud-security-posture (Team:Cloud Security)
@JordanSh please reserve this task as a good first issue for Kirill
Can you please discuss what the proposed solution is here? Do we wish to improve the UX by adding an update indicator?
@kfirpeled I think this is a great opportunity to introduce optimistic updates to this table (a topic I happen to cover in a workshop I recently created).
Optimistic updates essentially mean changing the UI before the network request completes, then re-rendering the page with the actual data once the request finishes. Of course, if the request fails, we roll back to the previous state and show an error toast. Ideally we want to do all this while also converting the call to use useMutation.
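A minimal, framework-free sketch of that flow (all names here are illustrative, not the actual Kibana code; the real table would go through React Query's useMutation):

```typescript
// Sketch of the optimistic-update flow: render the new state immediately,
// then roll back and show an error toast if the request fails.
type Rule = { id: string; enabled: boolean };

async function toggleRuleOptimistically(
  rules: Rule[],
  id: string,
  render: (next: Rule[]) => void,          // re-renders the table
  persist: (rule: Rule) => Promise<void>,  // the network request
  showErrorToast: (message: string) => void,
): Promise<Rule[]> {
  const previous = rules;
  // 1. Update the UI before the network request completes.
  const optimistic = rules.map((r) =>
    r.id === id ? { ...r, enabled: !r.enabled } : r
  );
  render(optimistic);
  try {
    // 2. Fire the actual request with the new value.
    await persist(optimistic.find((r) => r.id === id)!);
    return optimistic;
  } catch {
    // 3. On failure, roll back to the previous state and surface an error.
    render(previous);
    showErrorToast('Failed to update rule');
    return previous;
  }
}
```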
As a side note @kfirpeled, I'm not sure this counts as a "good first issue", since it involves a higher-complexity solution and performance in general is a more advanced topic. I'm sure Kirill will be able to do this task; I just don't think the good first issue tag is appropriate here.
In addition, we should make sure this stutter has nothing to do with React performance and is really just a network request taking time. If there are React performance issues (for example, toggling one switch refetching and re-rendering the entire table and its data), that will require a different fix in addition to the optimistic updates.
I am blocking the bug until the Cloud Security Posture team decides to move forward with refactoring this page to use a State-Context pattern as outlined in this proposal.
Using the proposed pattern should allow us to use optimistic rendering, making the interactive components more performant, while also centralizing the business rules and state management of the page.
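One way to read "centralize the business rules and state management" is a single reducer that owns the page state, which a Context provider would then expose to the components. A rough sketch with assumed state shape and action names (not the actual proposal):

```typescript
// Hypothetical page state and actions; a React Context provider would wrap
// this reducer with useReducer and expose dispatch to the interactive components.
type RulesPageState = {
  rules: { id: string; enabled: boolean }[];
  error?: string;
};

type RulesPageAction =
  | { type: 'toggleRule'; id: string }  // optimistic flip
  | { type: 'rollback'; previous: RulesPageState['rules']; error: string };

function rulesPageReducer(
  state: RulesPageState,
  action: RulesPageAction,
): RulesPageState {
  switch (action.type) {
    case 'toggleRule':
      // The business rule lives in one place instead of per-component handlers.
      return {
        ...state,
        error: undefined,
        rules: state.rules.map((r) =>
          r.id === action.id ? { ...r, enabled: !r.enabled } : r
        ),
      };
    case 'rollback':
      // Restore the pre-mutation rules and record the failure.
      return { ...state, rules: action.previous, error: action.error };
  }
}
```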
Hi @seanrathier please see my comment
@kfirpeled, regarding your comment, we went ahead with the optimistic updating by using the React Query useMutation hook.
After discovering that we no longer need to await the invalidateQueries call and some other fetches, I reverted to the main branch and removed the await(s) that could be executed in the "background", but did not see a change in performance.
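Concretely, the kind of change tried here looks like this (hypothetical handler names, not the actual code): the handler resolves as soon as the update is persisted, and the cache invalidation runs in the background instead of being awaited.

```typescript
// Sketch: stop awaiting work that can safely run in the background.
async function saveToggle(
  persist: () => Promise<void>,          // must complete before reporting success
  invalidateCache: () => Promise<void>,  // refetch can happen later
): Promise<void> {
  await persist();
  // Before: `await invalidateCache();` blocked the handler until the refetch
  // finished. After: fire-and-forget, so the UI stays responsive.
  void invalidateCache();
}
```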
Optimistic updates are needed to fix this performance issue. However, organizing the code for this page so that the business rules and HTTP requests are consolidated in a Context-State pattern would be better for readability and, more importantly, would make it easier to remove unnecessary performance helpers and awaits that block code execution.
Also, answered in your message here
Thanks. Can you explain why? Is the performance hit within our API or somewhere else?
Optimistic updates are needed to fix this performance issue
It is a cosmetic change, not a fix ;) Asking out of curiosity: if there's an issue with the API, I'd like to make a task in our backlog to fix it
@kfirpeled , curiosity is good ;)
Yes, there is an issue with the API, as @JordanSh pointed out here, and this should be investigated further. I did not see this initially when I was investigating this issue LOMM.
Using optimistic updates via the useMutation hook is not a band-aid solution; it is fairly common in applications using the React Query library.
Not sure why it should take ~1s for a bulk update. Do you mind opening a ticket for that? I wouldn't expect it to take more than 0.5s.
@kfirpeled Here is the new issue
I was testing the behavior in serverless as part of the 8.15 BC3 QA cycle, and here is how it looked for me:
https://github.com/user-attachments/assets/e71a7761-1a84-4ff2-9eb3-400a278d0709
As I don't see a big difference from the video attached to the original issue, I think it makes sense to reopen the issue and make another attempt at fixing it. Wdyt @seanrathier @kfirpeled?
Reopening, as I think we also need to verify it for ESS; for me the experience was very similar to the one in serverless.
Kibana version:
8.13.0
Build: 71738
Commit hash: b036a9705a55f6c81d065011ad8c991cbc3101d9
Describe the bug: Disabling/enabling one of the rules takes around 3 seconds; it feels like it should be faster.
Steps to reproduce:
Expected behavior: The operation shouldn't feel delayed to the user.
Screen record (if relevant):
https://github.com/elastic/kibana/assets/68195305/47c8b587-a166-4710-8dec-46fa18a0dca6