Atinoda / text-generation-webui-docker

Docker variants of oobabooga's text-generation-webui, including pre-built images.
GNU Affero General Public License v3.0
391 stars 75 forks

Intel Arc support #38

Open Atinoda opened 8 months ago

Atinoda commented 8 months ago

Intel Arc GPUs have their own images now, according to the developments in the upstream project. I do not have the hardware to test them - so please give them a go! Reports are welcomed.

ksullivan86 commented 6 months ago

I was testing this out but I couldn't get it to work with my A770 16GB. I was able to run CPU-only (on the Arc docker image), but even with CPU unchecked and 10 GPU layers set, only the CPU was active during use. I'm guessing I'm doing something wrong. I'm happy to help if I can; just let me know what I can do.

I am wondering if the problem is the integrated GPU in the 14700K preventing the A770 from being used; I've seen this mentioned a few times.

Atinoda commented 6 months ago

@ksullivan86 - thank you for testing it out and reporting back your experiences! I'll check into what is required to make the card available to the container. Would you mind sharing what OS you are using, and any changes that you made to the docker-compose.yml? I would be keen to help you get this up and running, then we can share the results here for other people too.

ksullivan86 commented 6 months ago

I am using Unraid 6.12.8, but with a custom 6.7 kernel (thor) for Arc support. Here is my compose file; I don't think there is anything special in it. I do have - /dev/dri:/dev/dri set up for access to the Arc, but I think that also exposes the iGPU from my 14700K.
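One way to check whether the iGPU and the A770 are both being exposed through /dev/dri is to list the device nodes by PCI path on the host. This is a sketch; the PCI addresses shown in the comments are examples only, and will differ on your system:

```shell
# List DRI device nodes with their PCI addresses to tell the iGPU and A770 apart.
# On a 14700K the iGPU typically sits at pci-0000:00:02.0; a discrete card will
# be on a different bus (e.g. pci-0000:03:00.0 - example address, check yours).
ls -l /dev/dri/by-path/

# Cross-check against lspci to confirm which bus the Arc card is on:
lspci | grep -i -E 'vga|display'
```

If both GPUs appear, the application inside the container may be picking the wrong device; knowing which renderD* node belongs to the A770 helps narrow that down.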

Atinoda commented 6 months ago

Sorry for my delayed reply, and thank you for sharing the information! Although I don't have experience with Unraid, there has been a successful deployment story at #27 with AMD hardware, and you have already got it running with CPU so we'll proceed on the assumption that everything is working well. Do you know if docker runs with root privileges on Unraid? It may be necessary to grant additional group membership if the runner account is limited in privileges.
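To check the privilege question on the host, it may help to look at which groups own the DRI device nodes and what their numeric GIDs are. A rough sketch (group names vary by distro; modern kernels often use a separate 'render' group for the renderD* nodes):

```shell
# Show group ownership and permissions of the DRI device nodes.
# The account running inside the container needs membership of these groups.
ls -l /dev/dri
stat -c '%G %a %n' /dev/dri/*

# Look up the numeric GIDs, useful for docker-compose's group_add:
getent group video render
```

Note that `group_add` with a group *name* only works if that name exists inside the container image; passing the numeric GID from `getent` is the safer option.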

Can you please try using the following docker-compose.yml? I have added parameters that blow the doors off with regard to security and container isolation - but the idea is to see if this works, then pare back afterwards:

version: "3"
services:
  text-generation-webui:
    image: atinoda/text-generation-webui:default-arc  # Specify variant as the :tag
    container_name: text-generation-webui-arc
    network_mode: docker_network
    environment:
      - TZ=America/Los_Angeles
      - EXTRA_LAUNCH_ARGS="--listen --verbose" # Custom launch args (e.g., --model MODEL_NAME)
#      - BUILD_EXTENSIONS_LIVE="silero_tts whisper_stt" # Install named extensions during every container launch. THIS WILL SIGNIFICANTLY SLOW LAUNCH TIME.
      - PUID=99
      - PGID=100
    ports:
      - 7867:7860  # Default web port
      - 5200:5000  # Default API port
      - 5205:5005  # Default streaming port
      - 5201:5001  # Default OpenAI API extension port

    # labels:
    #   traefik.enable: true  # allows traefik reverse proxy to see the app
    #   traefik.http.routers.text-generation-webui-cpu.entryPoints: https
    #   traefik.http.services.text-generation-webui-cpu.loadbalancer.server.port: 7860  # specifies the port for traefik to route
    volumes:
      - /mnt/user/ai/text_generation_webui/models:/app/models
      - /mnt/user/ai/appdata/text_generation_webui/text-generation-webui-arc/characters:/app/characters
      - /mnt/user/ai/appdata/text_generation_webui/text-generation-webui-arc/loras:/app/loras
      - /mnt/user/ai/appdata/text_generation_webui/text-generation-webui-arc/presets:/app/presets
      - /mnt/user/ai/appdata/text_generation_webui/text-generation-webui-arc/prompts:/app/prompts
      - /mnt/user/ai/appdata/text_generation_webui/text-generation-webui-arc/training:/app/training
      - /mnt/user/ai/appdata/text_generation_webui/text-generation-webui-arc/extensions:/app/extensions  # Persist all extensions
#      - ./config/extensions/silero_tts:/app/extensions/silero_tts  # Persist a single extension
    logging:
      driver: json-file
      options:
        max-file: "3"   # number of log files to keep
        max-size: '10m'
    restart: unless-stopped
    # NEW PARAMS:
    group_add:
      - video
    tty: true
    ipc: host
    devices:
      - /dev/kfd
      - /dev/dri
    cap_add:
      - SYS_PTRACE
    security_opt:
      - seccomp=unconfined
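Once the kitchen-sink config above is confirmed working, the Arc-specific part can likely be pared back. As a sketch (untested): /dev/kfd is an AMD ROCm device and should not be needed for Intel cards, and the numeric GIDs from `getent group` can stand in for group names that don't exist inside the image:

```yaml
    # Minimal device access for an Intel Arc card (sketch, untested):
    devices:
      - /dev/dri:/dev/dri   # Intel GPUs only need the DRI nodes
    group_add:
      - video               # or the numeric GID from `getent group video`
      - render              # renderD* nodes are often owned by 'render' on modern kernels
```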
FirestarDrive commented 5 months ago

Just got the WebUI to work on Arc A770M and load GGUF models onto the GPU. Churns out 12 tok/s on OpenOrca 7B Q5 (pretty good for a laptop card, huh?)

Turns out, llama-cpp-python needs to be built using Intel's compiler to recognise the GPU as per NineMeowICT's excellent findings.
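For reference, the upstream llama.cpp SYCL instructions build llama-cpp-python roughly like this. This is a sketch assuming the Intel oneAPI Base Toolkit is installed under /opt/intel/oneapi; the exact flags and paths may differ from NineMeowICT's recipe:

```shell
# Build llama-cpp-python against llama.cpp's SYCL backend using the oneAPI
# compilers (icx/icpx), so it can target Intel Arc GPUs.
source /opt/intel/oneapi/setvars.sh
CMAKE_ARGS="-DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx" \
  pip install --force-reinstall --no-cache-dir llama-cpp-python

# Afterwards, sycl-ls (from the oneAPI toolkit) should list the Arc GPU
# as an available Level Zero device.
sycl-ls
```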

Here's my build; perhaps it can be integrated with yours?

Atinoda commented 4 months ago

Hi @FirestarDrive, thank you for the report and the links to the working build! I will use this information to develop the arc variant to get it up and running.