microsoft / chat-copilot


Looking for your backend - ID Token Loop #545

Closed: rmaster-smirth closed this issue 5 months ago

rmaster-smirth commented 10 months ago

I am able to run this locally, however when I deploy it using the deploy-webapi.ps1 script it remains on the "Looking for your backend" screen. After the initial start, the browser console shows a continuous loop of the following 7 events:

  1. CacheManager:getIdToken - Returning id token
  2. CacheManager:getAccessToken - Returning access token
  3. CacheManager:getRefreshToken - returning refresh token
  4. Emitting event: msal:acquireTokenSuccess
  5. CacheManager:getIdToken - Returning id token
  6. CacheManager:getIdToken - Returning id token
  7. Emitting event: msal:acquireTokenStart

[screenshot: browser console showing the repeating token events]

The frontend app registration does have the URL of the app in the redirect URIs for the single-page application and /healthz is showing "Healthy".
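
For anyone trying to reproduce this, the loop can be timestamped with a standard msal-browser event callback. This is a generic sketch, not code from this repo; the clientId is a placeholder:

```typescript
import { PublicClientApplication, EventType, EventMessage } from "@azure/msal-browser";

// Placeholder config; use the values from your frontend app registration.
const msalInstance = new PublicClientApplication({
    auth: { clientId: "<frontend-client-id>" },
});

// Timestamp every token acquisition event so the period of the loop
// (and whether it is silent or interactive) becomes visible.
msalInstance.addEventCallback((message: EventMessage) => {
    if (
        message.eventType === EventType.ACQUIRE_TOKEN_START ||
        message.eventType === EventType.ACQUIRE_TOKEN_SUCCESS
    ) {
        console.log(new Date().toISOString(), message.eventType, message.interactionType);
    }
});
```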

rmaster-smirth commented 10 months ago

To add some additional information: if I redeploy the app while it's stuck in this loop, the WebSocket closes with an error, then reconnects once the deployment finishes, and the app finally progresses past the "Looking for your backend" message. From there I'm able to start a new chat, but when I send a message I receive "Unable to generate bot response. Details: Error: 500: Internal Server Error".

In the logs, the Logon Method and User show as anonymous, even though the frontend shows my user account as signed in at the upper right.

crickman commented 10 months ago

@gitri-ms - Can you please weigh in on whether this seems related to the auth config?

dehoward commented 10 months ago

@rmaster-smirth can you please post a screenshot of the browser's Network tab? I'm specifically interested in the response of the /maintenanceStatus call.
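
If grabbing a screenshot is awkward, a quick sketch like the following can dump both responses. The base URL is a placeholder, and a cross-origin call may be blocked by CORS, so it's easiest to run from the app's own browser console:

```typescript
// Placeholder base URL; point it at your deployed webapi.
const baseUrl = "https://<your-webapi>.azurewebsites.net";

async function checkBackend(): Promise<void> {
    // /healthz should return 200 "Healthy" when the webapi is up.
    const health = await fetch(`${baseUrl}/healthz`);
    console.log("healthz:", health.status, await health.text());

    // A JSON body from /maintenanceStatus means the backend reports
    // itself as under maintenance, which keeps the frontend waiting.
    const maintenance = await fetch(`${baseUrl}/maintenanceStatus`);
    console.log("maintenanceStatus:", maintenance.status, await maintenance.text());
}

checkBackend().catch(console.error);
```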

rmaster-smirth commented 10 months ago

Here is the output of the Network tab: [screenshot: network_tab]

And here are the headers from one of the maintenanceStatus calls: [screenshot: maintenanceStatus_headers]

The response is always:

```json
{
  "title": "Migrating Chat Memory",
  "message": "An upgrade requires that all non-document memories be migrated. This may take several minutes...",
  "note": "Note: All document memories will need to be re-imported."
}
```

The response from the token request that returns 400 Bad Request is below, but I believe this is related to #538:

```json
{
  "error": "invalid_grant",
  "error_description": "AADSTS700084: The refresh token was issued to a single page app (SPA), and therefore has a fixed, limited lifetime of 1.00:00:00, which cannot be extended. It is now expired and a new sign in request must be sent by the SPA to the sign in page. The token was issued on 2023-10-26T07:54:34.3169720Z.\r\nTrace ID: xxx\r\nCorrelation ID: xxx\r\nTimestamp: 2023-10-27 13:16:03Z",
  "error_codes": [700084],
  "timestamp": "2023-10-27 13:16:03Z",
  "trace_id": "xxx",
  "correlation_id": "xxx",
  "error_uri": "https://login.microsoftonline.com/error?code=700084",
  "suberror": "bad_token"
}
```
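
For what it's worth, the standard msal-browser pattern for that expiry is to catch the interaction-required error and fall back to an interactive request rather than retrying silently. A minimal sketch (generic MSAL usage, not chat-copilot's actual code):

```typescript
import {
    PublicClientApplication,
    InteractionRequiredAuthError,
    SilentRequest,
} from "@azure/msal-browser";

// When the SPA refresh token has expired (AADSTS700084), acquireTokenSilent
// throws InteractionRequiredAuthError; the fix is an interactive sign-in,
// not another silent retry.
async function getAccessToken(
    instance: PublicClientApplication,
    request: SilentRequest
): Promise<string | undefined> {
    try {
        const result = await instance.acquireTokenSilent(request);
        return result.accessToken;
    } catch (e) {
        if (e instanceof InteractionRequiredAuthError) {
            // The redirect flow reloads the page; the token is later
            // retrieved via handleRedirectPromise().
            await instance.acquireTokenRedirect({ scopes: request.scopes });
            return undefined;
        }
        throw e;
    }
}
```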

dehoward commented 10 months ago

@rmaster-smirth from @crickman's comment in #523:

> Can you check if you've hit the maximum number of indexes? Migration won't be able to proceed unless there is room to create two new indexes.

rmaster-smirth commented 10 months ago

The search service has zero indexes. Let me know if that's not what you're referring to.

glahaye commented 10 months ago

If your deployment is stuck for a long time in a state where a memory migration is required, make sure your Azure Cognitive Search instance has not reached its maximum number of indexes. To verify this, select your Azure Cognitive Search instance in the Azure portal, then click on 'Indexes'. If you see 15 indexes, delete at least 2 of them; your deployment should then be able to proceed with the memory migration.
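
If you'd rather check programmatically, the index names can also be listed through the search service's REST API. A sketch with a placeholder service name, admin key, and api-version:

```typescript
// Placeholders: service name and admin key; the api-version may also vary.
const service = "<your-search-service>";
const adminKey = "<admin-api-key>";

async function countIndexes(): Promise<void> {
    const res = await fetch(
        `https://${service}.search.windows.net/indexes?api-version=2023-11-01&$select=name`,
        { headers: { "api-key": adminKey } }
    );
    const body: { value: { name: string }[] } = await res.json();
    // Migration needs room for two new indexes, so with a 15-index cap,
    // anything above 13 existing indexes will block it.
    console.log(`index count: ${body.value.length}`);
    body.value.forEach((i) => console.log(" -", i.name));
}

countIndexes().catch(console.error);
```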

glahaye commented 9 months ago

@rmaster-smirth Did the last message help you solve the problem? Are you still experiencing it?

daleche-sh commented 8 months ago

@glahaye, I'm experiencing the same issue. Any advice? Does the chat-copilot solution require an Azure Cognitive Search instance? I didn't install one...

[screenshot attached]

glahaye commented 8 months ago

@daleche-sh When deployed, you do indeed need either Azure Cognitive Search or Qdrant. I STRONGLY suggest you use Azure Cognitive Search.

daleche-sh commented 8 months ago

> @daleche-sh When deployed, you do indeed need either Azure Cognitive Search or Qdrant. I STRONGLY suggest you use Azure Cognitive Search.

@glahaye, thank you for the reply. I followed the steps in this doc to run the app locally: https://learn.microsoft.com/en-us/semantic-kernel/chat-copilot/getting-started?tabs=Windows%2CPowershell. Is Azure Cognitive Search or Qdrant still required? The doc doesn't include any step to install ACS or Qdrant.

glahaye commented 8 months ago

@daleche-sh Locally, you don't need to set up ACS or Qdrant.

Are you experiencing the issue locally??
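
For context, when running locally the webapi defaults to an in-memory store, so no search service is involved. A sketch of the relevant webapi/appsettings.json section; these key names are assumptions from an older version of the repo, so verify them against your own file:

```json
// webapi/appsettings.json (sketch; key names may differ in your version)
"MemoryStore": {
  // "volatile" keeps memories in process memory, which is fine for local runs.
  "Type": "volatile",
  // For a deployed instance, switch to one of the persistent stores:
  // "Type": "azurecognitivesearch"  (or "qdrant")
  "AzureCognitiveSearch": {
    "Endpoint": "https://<your-search-service>.search.windows.net"
  }
}
```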

polverigianimarco commented 7 months ago

> @daleche-sh Locally, you don't need to set up ACS or Qdrant.
>
> Are you experiencing the issue locally??

Hi, I am experiencing this same issue locally. Any advice?

glahaye commented 7 months ago

There's a PR out to fix this whole problem: #791

I expect it to be merged today.