soulteary / docker-chatgpt

< 10MB, One-click self-hosted ChatGPT, allowing access to various data sources and non-OpenAI models.
https://github.com/soulteary/sparrow
Do What The F*ck You Want To Public License
120 stars 21 forks

Error with docker URLs? #8

Open dermot-mcg opened 5 months ago

dermot-mcg commented 5 months ago

I get the following error in the front end (hosted on a VPS, not locally; the VPS runs other containers for other services, so they share a network managed by Nginx Proxy Manager):

An error occurred. Either the engine you requested does not exist or there was another issue processing your request. If this issue persists please contact us through our help center at help.openai.com.
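This error message is generic: the web client shows it whenever its call to the backend upstream fails, for almost any reason. One way to narrow it down is to probe the route the UI requests on load. A sketch of a helper (hypothetical; `http://localhost:8090` matches the `APP_HOSTNAME` in the compose file below, adjust if yours differs):

```shell
# probe_cmd BASE — print the curl command for the route the UI calls on load.
# Copy-paste the output (or pipe to `sh`) on the host running the containers;
# a 404/502 or an HTML error page means chatgpt-client cannot reach sparrow.
probe_cmd() {
  echo "curl -is $1/backend-api/conversations?offset=0&limit=2"
}
probe_cmd "http://localhost:8090"
```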

docker-compose.yml

version: '3'

services:

  chatgpt-client:
    image: soulteary/chatgpt
    restart: always
    environment:
      APP_PORT: 8090
      # the ChatGPT client domain, keep the same with chatgpt-client: `APP_HOSTNAME` option
      APP_HOSTNAME: "http://localhost:8090"
      # the ChatGPT backend upstream, or connect a sparrow dev server `"http://host.docker.internal:8091"`
      APP_UPSTREAM: "http://sparrow:8091"
    networks:
      - nginxproxymanager_default

  sparrow:
    image: soulteary/sparrow
    restart: always
    environment:
      # [Basic Settings]
      # => The ChatGPT Web Client Domain
      WEB_CLIENT_HOSTNAME: "http://chatgpt-client:8090"
      # => Service port, default: 8091
      # APP_PORT: 8091

      # [Private OpenAI API Server Settings] *optional
      # => Enable OpenAI 3.5 API
      ENABLE_OPENAI_API: "on"
      # => OpenAI API Key
      OPENAI_API_KEY: "sk-iTsAsEcReT"
      # => Enable OpenAI API Proxy
      # OPENAI_API_PROXY_ENABLE: "on"
      # => OpenAI API Proxy Address, eg: `"http://127.0.0.1:1234"` or ""
      # OPENAI_API_PROXY_ADDR: "http://127.0.0.1:1234"
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
    networks:
      - nginxproxymanager_default

networks:
  nginxproxymanager_default:
    external: true
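One thing worth checking (an assumption, not something confirmed in this thread): `APP_HOSTNAME` is set to `http://localhost:8090`, but behind Nginx Proxy Manager the browser reaches the client through the proxy's public domain, and both hostname variables are expected to match the address the browser actually uses. A hypothetical fragment with a placeholder domain:

```yaml
# Hypothetical fragment: chat.example.com stands in for whatever domain is
# configured in Nginx Proxy Manager; both values should match the URL the
# browser actually loads.
services:
  chatgpt-client:
    environment:
      APP_HOSTNAME: "https://chat.example.com"
  sparrow:
    environment:
      WEB_CLIENT_HOSTNAME: "https://chat.example.com"
```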
maxsyst commented 3 months ago

I'm running into the same problem.

http://localhost:8090/backend-api/conversations?offset=0&limit=2

(screenshot of the failing request)

maxsyst commented 3 months ago

Can anyone suggest a fix?

officialdanielamani commented 2 months ago

Seems I may have found the solution.

Say the VPS address is "192.168.0.10".

Change localhost to the VPS address:

version: '3'

services:

  chatgpt-client:
    image: soulteary/chatgpt
    restart: always
    ports:
      - 8090:8090
    environment:
      # service port
      APP_PORT: 8090
      # the ChatGPT client domain, keep the same with chatgpt-client: `APP_HOSTNAME` option
      APP_HOSTNAME: "http://192.168.0.10:8090"
      # the ChatGPT backend upstream, or connect a sparrow dev server `"http://host.docker.internal:8091"`
      APP_UPSTREAM: "http://sparrow:8091"

  sparrow:
    image: soulteary/sparrow
    restart: always
    environment:
      # [Basic Settings]
      # => The ChatGPT Web Client Domain
      WEB_CLIENT_HOSTNAME: "http://192.168.0.10:8090"
      # => Service port, default: 8091
      APP_PORT: 8091

      # [Advanced Settings] *optional
      # => Enable the new UI
      FEATURE_NEW_UI: "on"
      # => Enable history list
      ENABLE_HISTORY_LIST: "on"
      # => Enable i18n
      ENABLE_I18N: "on"
      # => Enable the data control
      ENABLE_DATA_CONTROL: "on"
      # => Enable the model switch
      ENABLE_MODEL_SWITCH: "on"
      # enable the openai official model (include the plugin model)
      # ENABLE_OPENAI_OFFICIAL_MODEL: "on"

      # [Plugin Settings] *optional
      # => Enable the plugin
      # ENABLE_PLUGIN: "on"
      # => Enable the plugin browsing
      # ENABLE_PLUGIN_BROWSING: "on"
      # => Enable the plugin code interpreter
      # ENABLE_PLUGIN_CODE_INTERPRETER: "on"
      # enable the plugin model dev feature
      # ENABLE_PLUGIN_PLUGIN_DEV: "on"

      # [Private OpenAI API Server Settings] *optional
      # => Enable OpenAI 3.5 API
      ENABLE_OPENAI_API: "on"
      # => OpenAI API Key
      OPENAI_API_KEY: "sk-CHANGE ME WITH OWN API KEY"
      # => Enable OpenAI API Proxy
      # OPENAI_API_PROXY_ENABLE: "on"
      # => OpenAI API Proxy Address, eg: `"http://127.0.0.1:1234"` or ""
      # OPENAI_API_PROXY_ADDR: "http://127.0.0.1:1234"

      # [Private Midjourney Server Settings] *optional
      # => Enable Midjourney
      # ENABLE_MIDJOURNEY: "on"
      # => Enable Midjourney Only
      # ENABLE_MIDJOURNEY_ONLY: "on"
      # => Midjourney API Key
      # MIDJOURNEY_API_SECRET: "your-secret"
      # => Midjourney API Address, eg: `"ws://...."`, or `"ws://host.docker.internal:8092/ws"`
      # MIDJOURNEY_API_URL: "ws://localhost:8092/ws"

      # [Private FlagStudio Server Settings] *optional
      # => Enable FlagStudio
      # ENABLE_FLAGSTUDIO: "on"
      # => Enable FlagStudio only
      # ENABLE_FLAGSTUDIO_ONLY: "off"
      # => FlagStudio API Key
      # FLAGSTUDIO_API_KEY: "your-flagstudio-api-key"

      # [Private Claude Server Settings] *optional
      # => Enable Claude
      # ENABLE_CLAUDE: "on"
      # => Enable Claude Only
      # ENABLE_CLAUDE_ONLY: "on"
      # => Claude API Key
      # CLAUDE_API_SECRET: "your-secret"
      # => Claude API Address, eg: `"ws://...."`, or `"ws://host.docker.internal:8093/ws"`
      # CLAUDE_API_URL: "ws://localhost:8093/ws"

    logging:
      driver: "json-file"
      options:
        max-size: "10m"
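With the addresses changed, the URLs worth checking can be built from the VPS IP. A minimal sketch (the IP and port are the ones used in the compose file above):

```shell
# endpoints ADDR — print the URLs to check once both containers are up.
endpoints() {
  printf 'http://%s:8090/\n' "$1"                                            # chatgpt-client UI
  printf 'http://%s:8090/backend-api/conversations?offset=0&limit=2\n' "$1"  # proxied sparrow call
}
endpoints 192.168.0.10
```

If the second URL fails while the first works, the problem is between chatgpt-client and sparrow, not in the proxy.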

After that, I open the page at the VPS address where the containers are running:

If ENABLE_OPENAI_OFFICIAL_MODEL is enabled, the error "The administrator has disabled the export capability of this model." will show up.

To change the model, just append /?model=gpt-4 for GPT-4 or /?model=gpt-3 for GPT-3.
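The same switch can be scripted; a tiny helper (hypothetical, it only assembles the query string):

```shell
# model_url BASE MODEL — build the client URL with the ?model= switch appended.
model_url() {
  printf '%s/?model=%s\n' "$1" "$2"
}
model_url "http://192.168.0.10:8090" "gpt-4"   # → http://192.168.0.10:8090/?model=gpt-4
```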


Quick note: press Ctrl+R to reload the page after updating the containers. Also try an incognito tab with no extensions, to rule out an extension blocking the request.