Tom-Neverwinter opened this issue 3 months ago: would this work with oobabooga webui?
I haven't tested it, but it should work just fine. You could try replacing the WebUI service with an oobabooga service in docker-compose.yml, then point oobabooga at the OpenAI-compatible API at http://openresty:80/v1.
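For example, something along these lines in place of the WebUI service (a rough sketch only; the build context, port, and the way you set the API base URL are assumptions that depend on how you normally run oobabooga):

```yaml
# Hypothetical oobabooga service replacing WebUI in this docker-compose.yml.
# Adjust the build/image, port, and volumes to match your own oobabooga setup.
oobabooga:
  build:
    context: ./text-generation-webui   # assumption: you build your own image from the repo
  ports:
    - "7860:7860"                      # text-generation-webui's default UI port
  volumes:
    - '${MODEL_DIR}:${MODEL_DIR}'
  depends_on:
    - openresty
  # In oobabooga itself, set the OpenAI-compatible API base URL to
  # http://openresty:80/v1 -- the service name "openresty" resolves on the
  # compose network, so no published host port is needed for this path.
```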
Or you could remove WebUI from docker-compose.yml and just run the remaining two services. Either expose port 80 from the openresty service on your host, or attach openresty to the Docker network of your separate oobabooga docker-compose project (see the sketch after the config below). The docker-compose.yml config below exposes the OpenAI-compatible API endpoint on all of your network interfaces; then just connect to it from oobabooga:
```yaml
openresty:
  image: openresty/openresty:latest
  ports:
    - "80:80"
  volumes:
    - '${MODEL_DIR}:${MODEL_DIR}'
    - ./openresty/app:/app
    - ./openresty/app/lib/resty:/usr/local/openresty/site/lualib/resty
    - ./openresty/conf:/usr/local/openresty/nginx/conf
    - ./data/restylogs:/usr/local/openresty/nginx/logs
  env_file:
    - .env
  environment:
    - NGINX_CONF_PATH=/usr/local/openresty/nginx/conf
  depends_on:
    - llamacpp

llamacpp:
  build:
    context: llamacpp
    args:
      UBUNTU_VERSION: "${UBUNTU_VERSION}"
      CUDA_VERSION: "${CUDA_VERSION}"
      CUDA_DOCKER_ARCH: "${CUDA_DOCKER_ARCH}"
  pid: "host"
  env_file:
    - .env
  environment:
    DEFAULT_MODEL_CONFIG: /model-config/default-config.yml # OPTIONAL
  ports:
    - "127.0.0.1:8081:8081" # for debugging
    - "127.0.0.1:5000:5000" # for debugging
  volumes:
    - '${MODEL_DIR}:${MODEL_DIR}'
    - './llamacpp/app:/app'
    - ./data/llamacpp-logs:/llamacpp-logs
    - './model-config:/model-config'
  command: ["/usr/bin/python3", "api.py"]
  deploy:
    resources:
      reservations:
        devices:
          - driver: nvidia
            capabilities: [gpu]
```
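If oobabooga lives in a separate docker-compose project, you can attach it to this stack's network rather than going through the published host port. A rough sketch for the oobabooga project's docker-compose.yml, assuming this stack's default network is named `llamacpp_default` (check the real name with `docker network ls`; compose usually names it `<project-directory>_default`):

```yaml
# Separate oobabooga project's docker-compose.yml (sketch)
services:
  oobabooga:
    # ... your existing oobabooga service definition ...
    networks:
      - llamacpp_default          # join this stack's network as well

networks:
  llamacpp_default:
    external: true                # use the already-created network instead of creating a new one
```

Either way, oobabooga talks to http://openresty:80/v1 on the shared network, or to http://<host-ip>/v1 if you only use the published port 80. Assuming the proxy implements the usual OpenAI endpoints, a quick `curl http://localhost/v1/models` from the host is an easy way to confirm the endpoint is up before wiring oobabooga to it.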