roryabraham opened 3 weeks ago
cc @gedu
Hey, I'm Edu from Callstack, I can take this one
There are several obstacles to achieving this.
The obstacle here is that if the request fails after the replace call, it will show the wrong comment.
This is tricky as we are now reducing the queue size.
> The obstacle here is that if the request fails after the replace call, it will show the wrong comment.

Isn't that the case today with any `EditComment` request? Or is there something I'm missing?
Going on parental leave. @gedu @mountiny @shubham1206agra I leave this in your capable hands. My parting note is: please use as many automated tests as possible to cover this change 🙏🏼
@shubham1206agra I'm making a diagram, and I was thinking: what about a reaction? `deleteComment` has to delete `addEmojiReaction` too. Because right now, if you edit -> edit -> add reaction -> delete, you will see a request for each in that order, when we should see only a delete request, right?
Yes that is true
@shubham1206agra also just to make sure: if I edit -> create a thread -> add message in that thread -> delete original message, and this is the final state, then I only delete the `editComment`; the thread should still be created along with any message in it, right?
This is how it looks now
Yes
There's another case: Add comment -> Add comment in thread -> Delete parent comment -> Delete thread comment. No request should be made in this case.
> There's another case: Add comment -> Add comment in thread -> Delete parent comment -> Delete thread comment. No request should be made in this case.
Ohh, yeah, so nothing happens: no new thread is created and no new message is added to that thread. Cool, I'm adding this case too.
Just to make sure, these cases can happen at any time, not just offline. Given that the queue is being processed sequentially, the delete could occur at any time, and I have to prevent those messages from being sent, right?
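The cancellation behaviour being discussed here could be sketched roughly as follows. This is only an illustration with assumed names (`QueuedRequest`, `reportActionID`, `handleDelete`), not the real SequentialQueue API:

```typescript
// Assumed shape of a queued request; not Expensify's actual type.
type QueuedRequest = {
    command: 'AddComment' | 'EditComment' | 'AddEmojiReaction' | 'DeleteComment';
    reportActionID: string;
};

// When a delete arrives for a comment, decide what survives in the queue.
function handleDelete(queue: QueuedRequest[], reportActionID: string): QueuedRequest[] {
    const hasPendingAdd = queue.some(
        (r) => r.command === 'AddComment' && r.reportActionID === reportActionID,
    );
    if (hasPendingAdd) {
        // The comment never reached the server: cancel every queued request
        // about it (adds, edits, reactions) and send nothing.
        return queue.filter((r) => r.reportActionID !== reportActionID);
    }
    // The comment already exists server-side: drop now-pointless edits and
    // reactions, and enqueue only the delete.
    return queue
        .filter((r) => r.reportActionID !== reportActionID)
        .concat([{command: 'DeleteComment', reportActionID}]);
}
```

Under this sketch, edit -> edit -> add reaction -> delete on an offline-created comment yields an empty queue, while deleting a comment that exists on the server keeps just the one `DeleteComment`.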
@shubham1206agra Also I was thinking: if I create a thread -> message -> message -> delete last message -> delete thread, I'm not sure, but I think I found a bug, because my chat breaks. I lose my Task, and I should have two [Deleted Message] messages.
https://github.com/user-attachments/assets/06c219fa-24a8-4a30-b283-60c87267e647
Can you report the bug in slack?
> Can you report the bug in slack?
How can I do it? I don't have access to the prompt /bug
You can just copy the template and fill it manually
@gedu are you able to start a PR for this one? anyone from callstack team able to help?
Yes, I'm working on this, started already with the Delete comments
What would be the best approach to this case: Offline -> add message -> update -> delete message? What happens now is that the message is shown with a strikethrough, because it is only removed from Onyx by the server's response once we are back online. But since I won't make any request to the server in this case, I need to clean it up myself. I can just use the `successData`, but when?
1) As soon as the user deletes the message?
2) Or should I wait until the user is back online (this is just so the user sees the message with a strikethrough)?
Note: this only applies when the message was created while the user was offline; if the message already exists I need to send the DeleteMessage request.
Personally I vote for 1.
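A minimal sketch of option 1, assuming a plain `Map` standing in for the Onyx store and an `OnyxUpdate` shape; these are illustrative stand-ins, not the real Expensify helpers:

```typescript
// Assumed update shape: a null value means "remove this key".
type OnyxUpdate = {key: string; value: unknown};

// Option 1: the deleted comment was created offline, so no DeleteComment
// request will ever be sent. Apply the queued request's successData
// immediately instead of waiting for a server response.
function deleteOfflineComment(store: Map<string, unknown>, successData: OnyxUpdate[]): void {
    for (const update of successData) {
        if (update.value === null) {
            store.delete(update.key); // clear the optimistic data right away
        } else {
            store.set(update.key, update.value);
        }
    }
}
```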
@mountiny @shubham1206agra
I'm also removing the message from Onyx, and I'm writing the tests. I'm still working on the case Offline -> add message -> update -> delete message, because it is the most complex one.
Yeah, I think 1 makes sense. All of these flows will have to have thorough unit tests; this is super easy to mess up, so we need to cover it with automation.
The code is working. I'm adding tests for more cases: adding a comment with an attachment, with just an attachment, and with emoji reactions (adding and removing). I'm also testing the cases where the `deleteComment` has to be sent to the backend and where it doesn't.
I'm handling the case where a comment is used to create a thread and then gets deleted. In this situation I'm not removing it from the queue, because I don't know if there will be more messages or deletions. Additionally, it seems that an `OpenReport` must be done to create the thread, and I can't delete the comment because that would fail. Now I'll be working on the tests.
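The thread-parent constraint described here could be expressed roughly like this; the shapes and names are assumptions for illustration only:

```typescript
// Assumed queued-request shape; parentReportActionID marks a request (such as
// an OpenReport creating a thread) that depends on another queued comment.
type Queued = {command: string; reportActionID: string; parentReportActionID?: string};

// A queued AddComment can only be cancelled on delete if no other queued
// request depends on it as a thread parent; otherwise the thread's
// OpenReport would fail.
function canCancelQueuedAdd(queue: Queued[], reportActionID: string): boolean {
    return !queue.some((r) => r.parentReportActionID === reportActionID);
}
```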
@mountiny I'm still working on the DeleteComment, and I'm asking @zirgulis to help me with EditComment since it can be solved in parallel.
hi @gedu @mountiny, sure I will take the EditComment part, will post my update here tomorrow.
@mountiny @shubham1206agra created a PR for the DeleteComment, so we can start reviewing it. https://github.com/Expensify/App/pull/50919
Triggered auto assignment to @bfitzexpensify (Bug), see https://stackoverflow.com/c/expensify/questions/14418 for more details. Please add this bug to a GH project, as outlined in the SO.
> if you serialize both an `AddComment` and any number of `EditComment` requests, the result would be the same as one `AddComment` with the content of the final `EditComment`
@mountiny After a talk with @zirgulis, we will tackle that point in a new PR, not in the same one as the multiple `EditComment` dedupe, given that it will require more logic and a new set of tests. That way we can track it better if any issue shows up.
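For reference, the AddComment-plus-edits collapse being deferred here could look roughly like this; the `Request` shape and `mergeAddAndEdits` are illustrative names, not the actual implementation:

```typescript
// Assumed request shape for this sketch.
type Request = {command: 'AddComment' | 'EditComment'; reportActionID: string; text: string};

// A serialized AddComment plus any number of EditComments for the same
// comment collapses into a single AddComment carrying the final text.
function mergeAddAndEdits(queue: Request[]): Request[] {
    const result: Request[] = [];
    for (const req of queue) {
        const pendingAdd = result.find(
            (r) => r.command === 'AddComment' && r.reportActionID === req.reportActionID,
        );
        if (req.command === 'EditComment' && pendingAdd) {
            pendingAdd.text = req.text; // fold the edit into the queued add
        } else {
            result.push({...req});
        }
    }
    return result;
}
```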
> if you are offline and you serialize multiple `EditComment` requests, only the last one will take effect
@mountiny @gedu for this I have the code ready here, will create a PR tomorrow since I still need to test on emulators and do the screen recordings.
@mountiny @gedu Multiple EditComment requests deduping PR is open https://github.com/Expensify/App/pull/51149
Thanks!
Job added to Upwork: https://www.upwork.com/jobs/~021848400026446902359
Triggered auto assignment to Contributor-plus team member for initial proposal review - @dukenv0307 (External)
📣 @dukenv0307 🎉 An offer has been automatically sent to your Upwork account for the Reviewer role 🎉 Thanks for contributing to the Expensify app!
📣 @c3024 🎉 An offer has been automatically sent to your Upwork account for the Contributor role 🎉 Thanks for contributing to the Expensify app!
Offer link | Upwork job. Please accept the offer and leave a comment on the Github issue letting us know when we can expect a PR to be ready for review 🧑‍💻 Keep in mind: Code of Conduct | Contributing 📖
> if you serialize both an `AddComment` and any number of `EditComment` requests, the result would be the same as one `AddComment` with the content of the final `EditComment`
@mountiny I did the second item, and I'm seeing this behaviour: for a couple of seconds, until the response is back and the message rerenders, you can see the edited label next to the message. Is that ok?
Given that the `optimisticData` is applied before the request reaches the SequentialQueue, the label shows up. I can try to add a rollback case for it, but I want to confirm whether or not we want to show the edited label. The rollback would be applied as soon as the "conflict" is resolved, so the user probably won't see the edited label while offline; it will look like they just updated the comment.
https://github.com/user-attachments/assets/7147fa98-a2e5-4cf3-98cb-e724b03f1768
@gedu I think from a UX perspective it would be best not to show it, but I would not block on this if it's too complex to avoid showing the edited label.
> if you serialize both an `AddComment` and any number of `EditComment` requests, the result would be the same as one `AddComment` with the content of the final `EditComment`
Created a new PR to add the last point: https://github.com/Expensify/App/pull/51422. It depends on https://github.com/Expensify/App/pull/50919, so it would be good to merge that one first.
Problem
When we make unnecessary network requests, it slows down the app and contributes to higher traffic on our servers. In particular:
- if you are offline and you serialize multiple `EditComment` requests, only the last one will take effect
- if you serialize both an `AddComment` and any number of `EditComment` requests, the result would be the same as one `AddComment` with the content of the final `EditComment`
- if you serialize an `AddComment` followed by a `DeleteComment`, the end result is the same as not sending any network requests at all
- if you serialize an `EditComment` request followed by a `DeleteComment`, the end result is the same as only sending the `DeleteComment`
Solution
De-dupe the network requests!
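As a rough illustration of the first rule (multiple serialized `EditComment` requests reduce to the last one), with assumed request shapes rather than the real queue types:

```typescript
// Assumed shape of a queued edit for this sketch.
type EditRequest = {command: 'EditComment'; reportActionID: string; text: string};

// Keep only the most recent EditComment per comment; earlier edits are
// superseded and never need to reach the server.
function dedupeEdits(queue: EditRequest[]): EditRequest[] {
    const latest = new Map<string, EditRequest>();
    for (const req of queue) {
        latest.set(req.reportActionID, req); // later edits overwrite earlier ones
    }
    return [...latest.values()];
}
```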
Upwork Automation - Do Not Edit
Issue Owner
Current Issue Owner: @dukenv0307