Expensify / App

Welcome to New Expensify: a complete re-imagination of financial collaboration, centered around chat. Help us build the next generation of Expensify by sharing feedback and contributing to the code.
https://new.expensify.com
MIT License

[$250] Chat - Group is created with duplicate user when chat is started quickly on a slow network #49931

Open · lanitochka17 opened this issue 1 month ago

lanitochka17 commented 1 month ago

If you haven’t already, check out our contributing guidelines for onboarding and email contributors@expensify.com to request to join our Slack channel!


Version Number: 9.0.41-2
Reproducible in staging?: Y
Reproducible in production?: Y
If this was caught during regression testing, add the test name, ID and link from TestRail: N/A
Email or phone of affected tester (no customers): biruknew45+1195@gmail.com
Issue reported by: Applause - Internal Team

Action Performed:

  1. Go to https://staging.new.expensify.com/.
  2. Open the Developer Tools and go to the Network tab, then set the network speed to 3G
  3. Click on the FAB > Start Chat
  4. Enter a user's email, then quickly click on it before the fallback avatar changes to the correct user avatar
  5. Send a message

Expected Result:

The chat with the selected user should remain a 1-on-1 chat. The chat should not change into a group with duplicate users

Actual Result:

After sending the message, the chat turns into a group with duplicate users. It only corrects itself back to a single chat after switching to another chat and then returning to it

Workaround:

Unknown

Platforms:

Which of our officially supported platforms is this issue occurring on?

Screenshots/Videos

Add any screenshot/video evidence

https://github.com/user-attachments/assets/7ed99952-55d7-44f6-a72f-26f59e044154

View all open jobs on GitHub

Upwork Automation - Do Not Edit
  • Upwork Job URL: https://www.upwork.com/jobs/~021843049310841478139
  • Upwork Job ID: 1843049310841478139
  • Last Price Increase: 2024-10-13
  • Automatic offers:
    • DylanDylann | Contributor | 104420901
    • nkdengineer | Contributor | 104651259
Current Issue Owner: @nkdengineer
melvin-bot[bot] commented 1 month ago

Triggered auto assignment to @RachCHopkins (Bug), see https://stackoverflow.com/c/expensify/questions/14418 for more details. Please add this bug to a GH project, as outlined in the SO.

RachCHopkins commented 1 month ago

I don't understand what's happening in that video. I can't replicate this.


I'm on 2 bars of 4G.

Is the person reporting it saying that the two users other than themselves are, in fact, the same user?

RachCHopkins commented 1 month ago

Please clarify the "actual result"

RachCHopkins commented 4 weeks ago

Waiting on a response/clarification from QA team.

mvtglobally commented 3 weeks ago

The team is still able to reproduce it.

https://github.com/user-attachments/assets/44e432c6-24e5-4553-abba-0aab81c6dcb9

RachCHopkins commented 3 weeks ago

Ok, to sum up, it can be reproduced and we don't know what/who the third user is.

melvin-bot[bot] commented 3 weeks ago

Job added to Upwork: https://www.upwork.com/jobs/~021843049310841478139

melvin-bot[bot] commented 3 weeks ago

Triggered auto assignment to Contributor-plus team member for initial proposal review - @parasharrajat (External)

nkdengineer commented 3 weeks ago

Edited by proposal-police: This proposal was edited at 2024-10-07T12:12:00Z.

Proposal

Please re-state the problem that we are trying to solve in this issue.

After sending the message, the chat turns into a group with duplicate users. It only corrects itself back to a single chat after switching to another chat and then returning to it

What is the root cause of that problem?

When the OpenReport API is called, its success data is added to queuedOnyxUpdates and held until all write requests are complete before being merged into Onyx:

https://github.com/Expensify/App/blob/04214cdb3e35816fd28a640f1b3c1b23bd7407b0/src/libs/actions/QueuedOnyxUpdates.ts#L11-L15

We have logic to flush the queue after all requests are complete here:

https://github.com/Expensify/App/blob/04214cdb3e35816fd28a640f1b3c1b23bd7407b0/src/libs/Network/SequentialQueue.ts#L172-L174

But in this case, after the AddComment API is called, the queue is paused:

https://github.com/Expensify/App/blob/04214cdb3e35816fd28a640f1b3c1b23bd7407b0/src/libs/actions/OnyxUpdates.ts#L147-L153

After the GetMissingOnyxMessages API completes, the queue becomes active again, but it is empty, so queuedOnyxUpdates is never flushed. As a result, the DM's participant list contains both the optimistic and the current participant entry for the same user.

What changes do you think we should make in order to solve the problem?

We should only flush queuedOnyxUpdates when numberOfPersistedRequests is 0. If we merge this data into Onyx while there are still pending requests in the queue, it can cause a race condition, since later requests can update the same Onyx key as a previous request.

if (numberOfPersistedRequests === 0) {
    // Only merge the queued Onyx updates once no persisted write requests remain,
    // so a later request can't race on the same Onyx key.
    flushOnyxUpdatesQueue();
}

https://github.com/Expensify/App/blob/04214cdb3e35816fd28a640f1b3c1b23bd7407b0/src/libs/Network/SequentialQueue.ts#L192
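
For illustration, here is a minimal standalone sketch of the idea, using simplified stand-ins for the real SequentialQueue/QueuedOnyxUpdates modules (the function and key names below are hypothetical, not the actual implementation):

// Hypothetical, simplified model of the proposal (not the real module code).
type OnyxUpdate = {key: string; value: unknown};

const persistedRequests: string[] = ['OpenReport', 'AddComment'];
const queuedOnyxUpdates: OnyxUpdate[] = [];

function flushOnyxUpdatesQueue(): void {
    // Stand-in for merging everything queued so far into Onyx.
    queuedOnyxUpdates.length = 0;
}

function onRequestProcessed(successData: OnyxUpdate[]): void {
    queuedOnyxUpdates.push(...successData);
    persistedRequests.shift();

    const numberOfPersistedRequests = persistedRequests.length;
    // Proposed guard: while another write (e.g. AddComment) is still pending,
    // flushing now could race with it on the same key (the report's
    // participants), so wait until the queue is fully drained.
    if (numberOfPersistedRequests === 0) {
        flushOnyxUpdatesQueue();
    }
}

onRequestProcessed([{key: 'report_123', value: {participants: ['real-account']}}]); // no flush yet
onRequestProcessed([{key: 'reportActions_123', value: {}}]);                        // queue empty, so flush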

c3024 commented 3 weeks ago

Proposal

Please re-state the problem that we are trying to solve in this issue.

On slow networks, if we create a report and comment on it, the chat turns into a group with a duplicate user.

What is the root cause of that problem?

For write requests, we do not write the response onyxData and the corresponding successData/failureData/finallyData to Onyx directly. Instead, we write them to queuedOnyxUpdates to prevent the replay effect.

https://github.com/Expensify/App/blob/78e484f41e5fb5fa3f8827d2b4aea0ba55b581e4/src/libs/actions/OnyxUpdates.ts#L30-L32

These updates are flushed only after all the PersistedRequests are cleared.

https://github.com/Expensify/App/blob/78e484f41e5fb5fa3f8827d2b4aea0ba55b581e4/src/libs/Network/SequentialQueue.ts#L173-L175

We remove a request from the persisted requests only after the response is received and processed by all middlewares.

https://github.com/Expensify/App/blob/78e484f41e5fb5fa3f8827d2b4aea0ba55b581e4/src/libs/Network/SequentialQueue.ts#L90-L99

So, on slow networks, when we call OpenReport, its response and the corresponding request's successData, etc., are added to queuedOnyxUpdates. They cannot be flushed immediately because an AddComment request has already been added to the PersistedRequests. For a reason that needs to be fixed, the backend sends a previousUpdateID for AddComment that is larger than the previous lastUpdateID/lastClientUpdateID, which makes doesClientNeedToBeUpdated true.

https://github.com/Expensify/App/blob/78e484f41e5fb5fa3f8827d2b4aea0ba55b581e4/src/libs/Middleware/SaveResponseInOnyx.ts#L34

As a result, the AddComment response is not processed directly with OnyxUpdates.apply. Instead, the response is passed to saveUpdateInformation and the queue is paused. While the queue is paused, the updates waiting in queuedUpdates are not flushed, so OpenReport's successData sits in the queue, the optimistic user detail is never removed, and the report appears to be a group report.
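
To make this concrete, here is a minimal model of the failure mode, using hypothetical stand-ins for the queue state rather than the real SequentialQueue/QueuedOnyxUpdates code:

// Minimal model (hypothetical names): queued success data is only merged into
// Onyx when the sequential queue is idle and not paused.
const queuedOnyxUpdates: string[] = ['OpenReport successData'];
let remainingWriteRequests = 1;   // AddComment is still pending
let isQueuePaused = false;        // set while missing updates are being fetched

function maybeFlush(): void {
    if (!isQueuePaused && remainingWriteRequests === 0) {
        // Stand-in for merging the queued updates into Onyx, which is what
        // would remove the optimistic participant from the report.
        queuedOnyxUpdates.length = 0;
    }
}

maybeFlush();           // no flush: AddComment is still pending
isQueuePaused = true;   // AddComment response reports an update gap, so the queue pauses
remainingWriteRequests = 0;
maybeFlush();           // no flush: the queue is paused
console.log(queuedOnyxUpdates); // ['OpenReport successData'], still stranded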

What changes do you think we should make in order to solve the problem?

  1. First, it needs to be checked why the previousUpdateID for the AddComment response is sent as a higher value than the OpenReport call’s lastUpdateID/lastClientUpdateID. Fixing this issue should resolve the problem.
  2. However, even with an accidental call to GetMissingOnyxMessages, the previous updates should not be left in limbo without being updated to Onyx until a new write request is made. The updates in the queue need to be written to Onyx after all the missing updates are obtained with GetMissingOnyxMessages. Currently, this is not handled anywhere. When the next write request gets completed and the persistent requests array is empty, all queuedOnyxMessages get updated. We can see this in the present bug as well: if we send another message, everything updates correctly (the backend also does not send an incorrect previousUpdateID) because the queue is unpaused. After processing this request, when the persisted requests are empty, the queuedOnyxUpdates (which include the previous OpenReport and AddComment responses’ onyxData, successData, etc.) are flushed. To achieve this, after fetching the missing messages, we should flush the queue finally by changing this: https://github.com/Expensify/App/blob/78e484f41e5fb5fa3f8827d2b4aea0ba55b581e4/src/libs/actions/OnyxUpdateManager/index.ts#L144 to:
    DeferredOnyxUpdates.getMissingOnyxUpdatesQueryPromise()?.finally(finalizeUpdatesAndResumeQueue).then(() => QueuedOnyxUpdates.flushQueue());
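
Roughly, the ordering proposed in point 2 above, sketched with stand-in functions (hypothetical names; the real helpers are DeferredOnyxUpdates.getMissingOnyxUpdatesQueryPromise, finalizeUpdatesAndResumeQueue, and QueuedOnyxUpdates.flushQueue linked above):

// Hypothetical sketch of the proposed ordering after an update gap is detected.
async function fetchMissingOnyxUpdates(): Promise<void> {
    // Stand-in for the GetMissingOnyxMessages request.
}

function finalizeUpdatesAndResumeQueue(): void {
    // Stand-in: applies the deferred updates and unpauses the SequentialQueue.
}

function flushQueuedOnyxUpdates(): void {
    // Stand-in for QueuedOnyxUpdates.flushQueue(): merges the waiting
    // successData (e.g. from OpenReport) into Onyx.
}

// Proposed change: once the missing updates are fetched and the queue has
// resumed, flush whatever is still waiting in queuedOnyxUpdates instead of
// leaving it until the next write request completes.
fetchMissingOnyxUpdates()
    .finally(finalizeUpdatesAndResumeQueue)
    .then(flushQueuedOnyxUpdates);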

    What alternative solutions did you explore? (Optional)

melvin-bot[bot] commented 3 weeks ago

@parasharrajat, @RachCHopkins Whoops! This issue is 2 days overdue. Let's get this updated quick!

RachCHopkins commented 3 weeks ago

@parasharrajat do you like any of the proposals here?

melvin-bot[bot] commented 2 weeks ago

📣 It's been a week! Do we have any satisfactory proposals yet? Do we need to adjust the bounty for this issue? 💸

parasharrajat commented 2 weeks ago

@RachCHopkins I won't be available from 16 Oct for a few days, please reassign.

DylanDylann commented 2 weeks ago

I can take over this issue as C+

melvin-bot[bot] commented 2 weeks ago

@parasharrajat @RachCHopkins this issue was created 2 weeks ago. Are we close to approving a proposal? If not, what's blocking us from getting this issue assigned? Don't hesitate to create a thread in #expensify-open-source to align faster in real time. Thanks!

melvin-bot[bot] commented 2 weeks ago

📣 @DylanDylann 🎉 An offer has been automatically sent to your Upwork account for the Contributor role 🎉 Thanks for contributing to the Expensify app!

Offer link Upwork job Please accept the offer and leave a comment on the Github issue letting us know when we can expect a PR to be ready for review 🧑‍💻 Keep in mind: Code of Conduct | Contributing 📖

RachCHopkins commented 2 weeks ago

Oops. I'm not sure if that matters 😬

RachCHopkins commented 2 weeks ago

@DylanDylann how do you feel about the proposals here?

DylanDylann commented 2 weeks ago

on my list today

DylanDylann commented 2 weeks ago

It will take me a bit of time to dive deep into this issue because it seems to be complicated.

DylanDylann commented 1 week ago

First, it needs to be checked why the previousUpdateID for the AddComment response is sent as a higher value than the OpenReport call’s lastUpdateID/lastClientUpdateID. Fixing this issue should resolve the problem.

@c3024 Could you help to detail this point?

c3024 commented 1 week ago

@DylanDylann

Sure!

https://github.com/user-attachments/assets/637e66ac-85b8-4a36-b6c3-f8fc690d061f

  1. For all write requests, the backend sends a previousUpdateID and a lastUpdateID. In the above video, the lastUpdateID sent back in the OpenReport response is 2468230213. This means the backend considers this ID to be the last time the client device was updated.
  2. Then, when we made an AddComment request, the server’s response returned a previousUpdateID of 2468230219. This means the backend is informing the client that the freshest update it sent previously to all devices is 2468230219. This is larger than the lastUpdateID (2468230213) saved to the device. The backend is suggesting to the client that there are updates between the OpenReport response (2468230213) and the AddComment response (2468230219). However, this is incorrect because it is impossible for there to be any updates in between, as this is the only client making changes to this account. The backend is sending an incorrect previousUpdateID of 2468230219 when it should be sending the lastUpdateID from the OpenReport response, which is 2468230213.
  3. Due to this incorrect previousUpdateID (which is larger than the lastUpdateID), the client thinks it is out of date and pauses applying the AddComment response to Onyx, making a GetOnyxMessages call to retrieve the missing updates. This call returns only the updates between 2468230213 and 2468230219, as can be seen.
  4. The lastUpdateID of a write request response should match the previousUpdateID of the next write request if no other updates have been made by another client to the same report, account, etc. (See the simplified sketch below this list.)
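
In code terms, the gap check boils down to a comparison like the following. This is a minimal sketch using the IDs from the first video and a hypothetical helper name, not the exact client implementation:

// Simplified sketch of the update-gap check (hypothetical helper name).
function clientIsMissingUpdates(previousUpdateID: number, clientLastUpdateID: number): boolean {
    // If the server's "previous" update is newer than the last update this
    // client applied, the client assumes it missed updates in between.
    return previousUpdateID > clientLastUpdateID;
}

const lastUpdateIDFromOpenReport = 2468230213;     // saved after the OpenReport response
const previousUpdateIDFromAddComment = 2468230219; // reported by the AddComment response

// true: the client pauses the queue and fetches the "missing" updates between
// 2468230213 and 2468230219, even though this client made no other changes.
console.log(clientIsMissingUpdates(previousUpdateIDFromAddComment, lastUpdateIDFromOpenReport));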

https://github.com/user-attachments/assets/36aa8a34-43f8-4351-9be5-9c3476b29250

  1. In the above video, I added two more comments. The responses for these AddComment requests have the correct ids. The lastUpdateID of the first comment matches the previousUpdateID of the next comment, so no GetOnyxMessages call is made because the client understands that the chain of updates is intact and nothing is missing.

There is a fix required on the backend for the first AddComment response after the OpenReport response. However, there is also a generic problem in the frontend codebase, as mentioned in the RCA and in the second point of my suggested solution. The issue is that the last write response (in this case AddComment, along with the updates waiting in queuedOnyxUpdates) that triggered the GetMissingOnyxMessages call is not written immediately after the missing updates are fetched, even though the SequentialQueue is unpaused at that point. This happens because there is no code that flushes the queuedUpdates unless another response is received from the backend, as outlined in the second point of my solution.

When the next write request gets completed and the persistent requests array is empty, all queuedOnyxMessages get updated. We can see this in the present bug as well: if we send another message, everything updates correctly (the backend also does not send an incorrect previousUpdateID) because the queue is unpaused. After processing this request, when the persisted requests are empty, the queuedOnyxUpdates (which include the previous OpenReport and AddComment responses’ onyxData, successData, etc.) are flushed.

Feel free to ask if there is still something unclear. As you mentioned, this is a fairly complex issue, and I may have missed explaining some parts fully in an effort to keep the comments concise.

DylanDylann commented 1 week ago

@c3024 Thanks for the detailed explanation

nkdengineer commented 1 week ago

Updated proposal

mkzie2 commented 1 week ago

@DylanDylann My solution here can fix both issues. @nkdengineer your updated solution is the same as mine.

DylanDylann commented 1 week ago

@nkdengineer

For the reason why GetMissingOnyxMessages API is called I think maybe it's expected from BE.

Why do you think it is expected?

DylanDylann commented 1 week ago

Hi @mkzie2, thanks for your contribution. Your solution and @nkdengineer's solution share the same idea, but @nkdengineer proposed it first in https://github.com/Expensify/App/issues/46393#issuecomment-2255598546

nkdengineer commented 1 week ago

Why do you think it is expected?

I'm not sure about the logic of it, that's my guess.

DylanDylann commented 1 week ago

Thanks, everyone, for your input.

I agree with @c3024's RCA:

  1. The previousUpdateID on the AddComment API is weird (more detail here). But I am not sure about the BE logic, so let's wait for an internal engineer to verify that.

  2. We don't have logic to flush the OnyxUpdateQueue when it is unpaused. For this problem, I lean toward @nkdengineer's proposal.

🎀 👀 🎀 C+ reviewed

melvin-bot[bot] commented 1 week ago

Triggered auto assignment to @aldo-expensify, see https://stackoverflow.com/c/expensify/questions/7972 for more details.

DylanDylann commented 1 week ago

@c3024 Please raise any concerns if you see a problem. I really appreciate your detailed RCA investigation.

c3024 commented 1 week ago

That is fine. It is necessary for PersistedRequests to be empty before flushing the queue. While I explained it in the next AddComment part, I missed considering it in my solution.

aldo-expensify commented 1 week ago

hmm, thanks @c3024 for that detailed explanation: https://github.com/Expensify/App/issues/49931#issuecomment-2422259161

I gave it a first read. I didn't understand it 100% because I'm not very familiar with all that, but I think I understood that there is a backend bug related to the Onyx update IDs being passed around. If this is true, I'd prefer to prioritize and fix that first, and then see where we are with this bug.

I'll give it some time later today to try to reproduce your detailed steps and see if I can understand/see the bug.

melvin-bot[bot] commented 6 days ago

@RachCHopkins, @aldo-expensify, @DylanDylann Uh oh! This issue is overdue by 2 days. Don't forget to update your issues!

aldo-expensify commented 6 days ago

I'll give it some time later today to try to reproduce your detailed steps and see if I can understand/see the bug.

I failed that day and then forgot to come back. I'll try to take a better look at this ASAP.

DylanDylann commented 6 days ago

not overdue, waiting for @aldo-expensify

melvin-bot[bot] commented 3 days ago

@RachCHopkins @aldo-expensify @DylanDylann this issue is now 4 weeks old, please consider:

Thanks!

aldo-expensify commented 3 days ago

Sorry, still haven't had time for this yet.

RachCHopkins commented 3 days ago

Waiting on Aldo. Not overdue.

aldo-expensify commented 2 days ago

  1. For all write requests, the backend sends a previousUpdateID and a lastUpdateID. In the above video, the lastUpdateID sent back in the OpenReport response is 2468230213. This means the backend considers this ID to be the last time the client device was updated.
  2. Then, when we made an AddComment request, the server’s response returned a previousUpdateID of 2468230219. This means the backend is informing the client that the freshest update it sent previously to all devices is 2468230219. This is larger than the lastUpdateID (2468230213) saved to the device. The backend is suggesting to the client that there are updates between the OpenReport response (2468230213) and the AddComment response (2468230219). However, this is incorrect because it is impossible for there to be any updates in between, as this is the only client making changes to this account. The backend is sending an incorrect previousUpdateID of 2468230219 when it should be sending the lastUpdateID from the OpenReport response, which is 2468230213.

I wasn't able to reproduce this with a slow connection.


But then, with these steps, I was able to reproduce it in dev:

  1. Go offline
  2. Open a chat with a user you haven't chatted before (it can be an account that doesn't exist)
  3. Send a message
  4. Go online
  5. Check the OpenReport response's lastUpdateID and the AddComment response's previousUpdateID

I'll investigate what the updates in between are and why these IDs are not matching.

Update: In the end, I was able to reproduce it without having to go offline. I think it happens sometimes and sometimes not.

aldo-expensify commented 2 days ago

Hmm... I think the lastUpdateID and previousUpdateID mismatch comes from the OpenReport command forwarding a CreateReportAction command to create the whisper "Hi there! This is your first time chatting with...". Sometimes the CreateReportAction can finish fast enough that its updates are considered in the previousUpdateID returned by OpenReport, and sometimes not.

Here: https://github.com/Expensify/Auth/blob/d3481121749cc204c713c771a13454dbe0165fbe/auth/lib/Report.cpp#L7751

aldo-expensify commented 2 days ago

Changing this behaviour in the backend sounds tricky to do. We forward the command to avoid making OpenReport slower, so there are fewer DB conflicts.

aldo-expensify commented 2 days ago

Assigning @nkdengineer since the proposal to fix the frontend makes sense to me too.

melvin-bot[bot] commented 2 days ago

📣 @nkdengineer 🎉 An offer has been automatically sent to your Upwork account for the Contributor role 🎉 Thanks for contributing to the Expensify app!

Offer link Upwork job Please accept the offer and leave a comment on the Github issue letting us know when we can expect a PR to be ready for review 🧑‍💻 Keep in mind: Code of Conduct | Contributing 📖

melvin-bot[bot] commented 14 hours ago

⚠️ Looks like this issue was linked to a Deploy Blocker here

If you are the assigned CME please investigate whether the linked PR caused a regression and leave a comment with the results.

If a regression has occurred and you are the assigned CM follow the instructions here.

If this regression could have been avoided please consider also proposing a recommendation to the PR checklist so that we can avoid it in the future.

DylanDylann commented 3 hours ago

@aldo-expensify, could you help check this comment? I believe we need to get the PR merged as soon as possible because there are many similar bugs in other places that would also be fixed by our PR.