huggingface / chat-ui

Open source codebase powering the HuggingChat app
https://huggingface.co/chat
Apache License 2.0

Proxy chat-ui by nginx subpath #1346

Closed. ljw20180420 closed this issue 3 months ago.

ljw20180420 commented 3 months ago

This is docker-compose.yml.

services:
  test_chat-ui-db:
    depends_on:
      - test_Phi-3-mini-4k-instruct-gguf
    restart: always
    container_name: test_chat-ui-db
    image: ghcr.io/huggingface/chat-ui-db
    ports:
      - "3000:3000"
    volumes:
      - "./.env.local:/app/.env.local"
      - "./chat-ui/data:/data"

  test_Phi-3-mini-4k-instruct-gguf:
    restart: always
    container_name: test_Phi-3-mini-4k-instruct-gguf
    image: ghcr.io/ggerganov/llama.cpp:server
    volumes:
      - "./chat-ui/llama.cpp:/models"
    command: -m /models/Phi-3-mini-4k-instruct-q4.gguf --host 0.0.0.0 -c 4096 --path /chat-ui-db

  nginx:
    depends_on:
      - test_chat-ui-db
    restart: always
    container_name: test_nginx
    image: nginx:latest
    ports:
      - "8000:80"
    volumes:
      - "./nginx.conf/:/etc/nginx/conf.d/app.conf:ro"
      - "./nginx/html/:/usr/share/nginx/html/:ro"

This is .env.local

MODELS=`[
  {
    "name": "microsoft/Phi-3-mini-4k-instruct-gguf",
    "tokenizer": "microsoft/Phi-3-mini-4k-instruct-gguf",
    "preprompt": "",
    "chatPromptTemplate": "<s>{{preprompt}}{{#each messages}}{{#ifUser}}<|user|>\n{{content}}<|end|>\n<|assistant|>\n{{/ifUser}}{{#ifAssistant}}{{content}}<|end|>\n{{/ifAssistant}}{{/each}}",
    "parameters": {
      "stop": ["<|end|>", "<|endoftext|>", "<|assistant|>"],
      "temperature": 0.7,
      "max_new_tokens": 1024,
      "truncate": 3071
    },
    "endpoints": [{
      "type" : "llamacpp",
      "baseURL": "http://test_Phi-3-mini-4k-instruct-gguf:8080",
      "accessToken": "abc"
    }],
  },
]`
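
As a quick sanity check, the llama.cpp backend can be probed over the compose network; the llama.cpp server exposes a /health endpoint. This is only a sketch, and it assumes curl is available inside the chat-ui-db container:

docker compose exec test_chat-ui-db curl -s http://test_Phi-3-mini-4k-instruct-gguf:8080/health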

This is the nginx configuration.

client_max_body_size 10G;

server {
    listen 80 default_server;
    # listen 443 ssl default_server;
    server_name localhost 127.0.0.1;
    # ssl_certificate /etc/letsencrypt/live/wulab/fullchain.pem;
    # ssl_certificate_key /etc/letsencrypt/live/wulab/privkey.pem;

    root /usr/share/nginx/html;

    location / {
        index homepage.html;
    }

    location /favicon.ico/ {
        index favicon.ico;
    }

    location /chat-ui-db/ {
        proxy_pass http://test_chat-ui-db:3000/;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Prefix /chat-ui-db/;
    }
}

When I access chat-ui at localhost:3000 and chat with Phi-3-mini-4k-instruct, it works. However, if I access chat-ui at localhost:8000/chat-ui-db and chat with Phi-3, it reports

POST /chat-ui-db/conversation/66965bf554757f85f2159a5d HTTP/1.1" 403

How can I proxy chat-ui by nginx subpath?
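
A rough way to narrow down where the 403 originates, assuming the stack above is running, is to compare the direct port with the proxied subpath from the host:

# direct to chat-ui, bypassing nginx
curl -i http://localhost:3000/
# through the nginx subpath
curl -i http://localhost:8000/chat-ui-db/

If the GET through nginx succeeds while only the POST to /chat-ui-db/conversation/... returns 403, the proxy is passing traffic and the rejection likely comes from chat-ui itself rather than from nginx.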

nsarrazin commented 3 months ago

Make sure to add a correct PUBLIC_ORIGIN env variable! That's probably your issue here.

See code or HuggingChat config
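
For reference, a minimal sketch of that variable in .env.local, assuming the origins used in the compose setup above:

PUBLIC_ORIGIN=http://localhost:8000

The value should match the origin the browser actually uses to reach the app (scheme, host and port), here the nginx side rather than the internal port 3000.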

ljw20180420 commented 3 months ago

Thank you. I set PUBLIC_ORIGIN to http://localhost:8000, but it still does not work.

services:
  test_Phi-3-mini-4k-instruct-gguf:
    restart: always
    container_name: test_Phi-3-mini-4k-instruct-gguf
    image: ghcr.io/ggerganov/llama.cpp:server
    volumes:
      - "./chat-ui/llama.cpp:/models"
    command: -m /models/Phi-3-mini-4k-instruct-q4.gguf --host 0.0.0.0 -c 4096
    environment:
      PUBLIC_ORIGIN: "http://localhost:8000"

nsarrazin commented 3 months ago

PUBLIC_ORIGIN should be on the chat-ui service, not the llama.cpp server 😄
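
A minimal sketch of where it would go, reusing the service names from the compose file above (the variable could equally be set in the mounted .env.local):

services:
  test_chat-ui-db:
    environment:
      PUBLIC_ORIGIN: "http://localhost:8000"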

ljw20180420 commented 3 months ago

It works! Thank you. 😄