WongSaang / chatgpt-ui

A ChatGPT web client that supports multiple users, multiple languages, and multiple database connections for persistent data storage. Provides Docker images and quick deployment scripts.
https://wongsaang.github.io/chatgpt-ui/
MIT License

Making concurrent requests #100

Closed StitiFatah closed 1 year ago

StitiFatah commented 1 year ago

Hi,

First thanks for this project.

It seems that we can't really make concurrent requests: if we open two windows and submit a prompt in each, the responses aren't generated simultaneously; the second one has to wait for the first to finish.

I've tested in the playground, and it's not a limitation of OpenAI's free trial.

IMO this negates the multi-user functionality, since two people can't use it at the same time.

spencerwongfeilong commented 1 year ago

I second this. May I suggest the project use more than one API key to reduce queuing of concurrent requests? For example, if the first key is being used to fulfil a request, the second key can be used to satisfy the concurrent request.

Or the concurrent request could wait until the first one has been fulfilled.

Please consider this.
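The multi-key idea above could be sketched as a simple round-robin selector (hypothetical; `API_KEYS` and `next_key` are illustrative names and values, not part of this project):

```python
from itertools import cycle

# Hypothetical pool of OpenAI API keys (illustrative placeholder values).
API_KEYS = ["sk-key-one", "sk-key-two", "sk-key-three"]

_key_cycle = cycle(API_KEYS)

def next_key() -> str:
    """Return the next API key in round-robin order, so concurrent
    requests are spread across the pool instead of queuing on one key."""
    return next(_key_cycle)
```

Each request would call `next_key()` before hitting the API, cycling back to the first key after the last one.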

GOvEy1nw commented 1 year ago

> I second to this. May I suggest the project use more than 1 api key to reduce the frequency of concurrent request calls. For example if the first api is being used to fulfil a request, the second api can be used to satisfy the concurrent request.
>
> Or the concurrent request can wait until the first one has been fulfilled.
>
> For consideration please.

I agree that concurrent requests are a must, but I don't think it's a limitation of the API; the whole point of the API is to be usable at scale, so it's likely something else that's blocking the concurrent requests. Adding additional API keys might not solve the issue, and even if it did, it would be more of a workaround than a solution.

StitiFatah commented 1 year ago

> > I second to this. May I suggest the project use more than 1 api key to reduce the frequency of concurrent request calls. For example if the first api is being used to fulfil a request, the second api can be used to satisfy the concurrent request.
> >
> > Or the concurrent request can wait until the first one has been fulfilled.
> >
> > For consideration please.
>
> I agree that concurrent requests is a must, but I don't think it's a limitation of the API, as the point of the API is to be able to use it en-mass, so it's likely something else that's blocking the concurrent request. Adding additional API keys might not solve the issue, and if it did, it'd be more of a work-around than a solution.

Yes, it's not; after all, the API is also meant for building products.

I thought it might be a limitation of the free trial, but it isn't either, as I said in my original message.

I'll try to look at the Django server code later today.

spencerwongfeilong commented 1 year ago

@WongSaang does the latest update fix the concurrent request problem?

WongSaang commented 1 year ago

https://github.com/WongSaang/chatgpt-ui/releases/tag/v2.3.6

Is this what you're looking for?


spencerwongfeilong commented 1 year ago

@WongSaang

I was not clear in my question, and I apologise.

In the release v2.3.6, the changes are

  1. Support conversation routing, isolating each conversation and supporting simultaneous chat in multiple conversations
  2. Localized prompt support for conversation title generation
  3. Fix some variables with ambiguous naming

What does "Support conversation routing, isolating each conversation and supporting simultaneous chat in multiple conversations" mean?

WongSaang commented 1 year ago

This means that you can chat in multiple conversations at the same time.

spencerwongfeilong commented 1 year ago

@WongSaang

I tried logging in to two different accounts and sending different questions to OpenAI.

The questions were not answered simultaneously. I had the impression that v2.3.6 fixed that.

WongSaang commented 1 year ago

Are you working in two tabs simultaneously?

spencerwongfeilong commented 1 year ago

> Are you working in two tabs simultaneously?

One tab in incognito mode (account A) and one tab in normal mode (account B).

WongSaang commented 1 year ago

Hello, after investigation, I found that the issue was caused by the backend service. The backend uses gunicorn for hosting, and by default, it only has one worker. This can cause blocking when multiple requests are made at the same time.

Solution: An environment variable `SERVER_WORKERS` has been added to control the number of workers in the backend. The default is 3 workers. If 3 workers are not enough, you can configure the environment variable on the wsgi-server service. We recommend setting the number of workers to (2 x $num_cores) + 1, where $num_cores is the number of cores allocated to your container. For example:

```yaml
backend-wsgi-server:
    image: wongsaang/chatgpt-ui-wsgi-server:latest
    environment:
      - SERVER_WORKERS=5
```

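The recommended worker count from the comment above, (2 x $num_cores) + 1, can be computed like this (a sketch; the helper name is ours, not part of the project):

```python
import os

def recommended_workers(num_cores=None):
    """Commonly suggested gunicorn worker count: (2 x cores) + 1.

    Falls back to the host's CPU count when num_cores is not given.
    """
    if num_cores is None:
        num_cores = os.cpu_count() or 1
    return 2 * num_cores + 1
```

For a 2-core container this gives 5, matching the `SERVER_WORKERS=5` example above.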
spencerwongfeilong commented 1 year ago

works great. Thank you!!

WongSaang commented 1 year ago

👌