Open Jacoby1218 opened 1 year ago
There has been no activity in this issue for 14 days. If this issue is still being experienced, please reply with an updated confirmation that the issue is still being experienced with the latest release.
Support for Intel ARC would be greatly appreciated. Is this in the works or are you at least considering working on this?
+1. Best GPU with lots of VRAM that "normal" people can easily afford. It should get more love in the AI world.
IPEX and pipe.to('xpu') are the secret sauce for doing this. There should be a way to glue it into InvokeAI...
# syntax=docker/dockerfile:1.6
FROM intel/intel-extension-for-pytorch:xpu-jupyter
USER root
ARG DEBIAN_FRONTEND=noninteractive
RUN --mount=type=cache,mode=0755,target=/root/.cache/pip python3 -m pip install diffusers transformers accelerate
RUN --mount=type=cache,mode=0755,target=/root/.cache/pip python3 -m pip install jupyter
ENTRYPOINT [ "jupyter", "notebook", "--allow-root", "--ip", "0.0.0.0", "--port", "9999" ]
docker build --tag arcsd .
docker run -it --rm --device /dev/dri -p 9999:9999 -v "$PWD/data:/data:rw" arcsd
Put this in a notebook:
import intel_extension_for_pytorch as ipex
import torch
from diffusers import StableDiffusionPipeline
# check Intel GPU
print(ipex.xpu.get_device_name(0))
# ignore the warning about image libraries
# load the Stable Diffusion model
pipe = StableDiffusionPipeline.from_pretrained(
    "/data/stable-diffusion-v1-5",
    safety_checker=None,
    torch_dtype=torch.bfloat16,
    use_safetensors=True,
)
# move the model to Intel Arc GPU
pipe = pipe.to("xpu")
# model is ready for submitting queries
for i in range(2):
    for image in pipe("The personification of spring in the form of a gorgeous golden retriever with a smile, (((gorgeous golden retriever))), highly detailed, sharp focus, sun rays, trending on artstation, 4k", num_images_per_prompt=3, height=512, width=512).images:
        display(image)
Support would be awesome, and it would open the door to using this software on affordable cards with enough VRAM, or high-end ones like the A770.
InvokeAI uses ubuntu:23.04 as a base and adds in libraries, rather than using accelerator-vendor-provided container images: https://github.com/invoke-ai/InvokeAI/blob/2e404b7cca87865ee1f02faa8707ceb711120096/docker/Dockerfile#L5C21-L5C27
I've been chasing down working dependencies for Intel GPU, aka XPU; the below may be useful for adding support to InvokeAI:
docker build --tag xpu:workaround - <<'EODOCKERFILE'
# syntax=docker/dockerfile:1.6
# this is what invokeai docker is based on
FROM ubuntu:23.04
#FROM python:3.11-bookworm
ARG DEBIAN_FRONTEND=noninteractive
RUN --mount=target=/var/lib/apt/lists,type=cache,sharing=locked --mount=target=/var/cache/apt,type=cache,sharing=locked <<'OSPREP'
apt update
apt upgrade -y
apt install -y python3-venv wget gpg
OSPREP
# PEP 668: the system Python is externally managed, so use a venv
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN pip install --upgrade pip setuptools
# RUN pip install mkl
# is broken? due to https://github.com/oneapi-src/oneMKL/issues/64#issuecomment-812632736
# therefore fall back to apt package
# https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html?operatingsystem=linux&distributions=aptpackagemanager
RUN --mount=target=/var/lib/apt/lists,type=cache,sharing=locked --mount=target=/var/cache/apt,type=cache,sharing=locked <<'EOMKL'
wget -O- https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS.PUB | gpg --dearmor | tee /usr/share/keyrings/oneapi-archive-keyring.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/oneapi-archive-keyring.gpg] https://apt.repos.intel.com/oneapi all main" | tee /etc/apt/sources.list.d/oneAPI.list
apt update
apt install -y intel-basekit
EOMKL
# https://github.com/intel/intel-extension-for-pytorch/issues/412#issuecomment-1715605398
RUN --mount=type=cache,target=/root/.cache/pip python3 -m pip install torch==2.0.1a0 torchvision==0.15.2a0 intel_extension_for_pytorch==2.0.110+xpu -f 'http://ec2-52-27-27-201.us-west-2.compute.amazonaws.com/ipex-release.php?device=xpu&repo=us&release=stable' --trusted-host ec2-52-27-27-201.us-west-2.compute.amazonaws.com
# libgomp.so.1: cannot open shared object file: No such file or directory
RUN --mount=target=/var/lib/apt/lists,type=cache,sharing=locked --mount=target=/var/cache/apt,type=cache,sharing=locked apt install -y libgomp1
# libze_loader.so.1: cannot open shared object file: No such file or directory
RUN --mount=target=/var/lib/apt/lists,type=cache,sharing=locked --mount=target=/var/cache/apt,type=cache,sharing=locked apt install -y libze1
# torch.xpu.is_available() false
RUN --mount=target=/var/lib/apt/lists,type=cache,sharing=locked --mount=target=/var/cache/apt,type=cache,sharing=locked apt install -y intel-opencl-icd
COPY --chmod=0755 <<'EOENTRY' /docker-entrypoint.sh
#!/bin/bash
# libmkl_intel_lp64.so.2: cannot open shared object file: No such file or directory
source /opt/intel/oneapi/setvars.sh
python3
EOENTRY
ENTRYPOINT [ "/docker-entrypoint.sh" ]
EODOCKERFILE
docker run -it --rm --device /dev/dri --user 0 xpu:workaround
:: initializing oneAPI environment ...
docker-entrypoint.sh: BASH_VERSION = 5.2.15(1)-release
args: Using "$@" for setvars.sh arguments:
:: advisor -- latest
:: ccl -- latest
:: compiler -- latest
:: dal -- latest
:: debugger -- latest
:: dev-utilities -- latest
:: dnnl -- latest
:: dpcpp-ct -- latest
:: dpl -- latest
:: ipp -- latest
:: ippcp -- latest
:: mkl -- latest
:: mpi -- latest
:: tbb -- latest
:: vtune -- latest
:: oneAPI environment initialized ::
Python 3.11.4 (main, Jun 9 2023, 07:59:55) [GCC 12.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import intel_extension_for_pytorch as ipex
/opt/venv/lib/python3.11/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: ''. If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
warn(
>>> import torch
>>> torch.xpu.is_available()
True
>>>
>>> print(ipex.xpu.get_device_name(0))
Intel(R) Graphics [0x56a0]
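With the dependencies working, the remaining glue on the InvokeAI side is mostly device selection. A minimal, hypothetical sketch of a fallback order (the probe callables below are stand-ins for the real availability checks such as torch.cuda.is_available and, with IPEX loaded, torch.xpu.is_available):

```python
# Hypothetical sketch of accelerator fallback for device selection;
# the probes are stand-ins for real availability checks
# (torch.cuda.is_available, torch.xpu.is_available with IPEX, ...).
def choose_device(probes):
    """Return the first device name whose probe reports availability."""
    for name, is_available in probes:
        if is_available():
            return name
    return "cpu"  # CPU always works as the last resort

probes = [
    ("cuda", lambda: False),  # no NVIDIA GPU on this box
    ("xpu", lambda: True),    # IPEX sees the Arc card
]
print(choose_device(probes))  # -> xpu
```

In real code the probe for "xpu" would only be registered after a successful `import intel_extension_for_pytorch`, so systems without IPEX fall through cleanly.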
This is open for a willing contributor to tackle!
I am currently running InvokeAI with this patch I have written. Using the following config, I am able to do basic image generation with my Intel Arc A750, but I had varying success when things got a bit more complicated. I suspect the 4 GB allocation limit is to blame there. Strangely, on my machine at least, the new xe kernel driver seemed to have fewer problems than the i915 driver.
device = "xpu";
precision = "bfloat16";
lazy_offload = true;
log_memory_usage = true;
log_level = "info";
attention_type = "sliced";
attention_slice_size = 4;
sequential_guidance = true;
force_tiled_decode = false;
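The sliced attention settings above are what help stay under per-allocation limits: instead of materializing the full (queries x keys) score matrix at once, attention is computed over chunks of query rows. A toy NumPy sketch of the idea (not InvokeAI's actual implementation):

```python
import numpy as np

# Toy sketch of sliced attention: process query rows in chunks so the
# full (queries x keys) score matrix is never materialized at once,
# trading a little speed for lower peak memory.
def sliced_attention(q, k, v, slice_size):
    d = q.shape[1]
    out = np.empty_like(q)
    for start in range(0, q.shape[0], slice_size):
        chunk = q[start:start + slice_size]          # (slice, d)
        scores = chunk @ k.T / np.sqrt(d)            # (slice, keys)
        scores -= scores.max(axis=1, keepdims=True)  # stable softmax
        weights = np.exp(scores)
        weights /= weights.sum(axis=1, keepdims=True)
        out[start:start + slice_size] = weights @ v
    return out

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((16, 8)) for _ in range(3))
# slice_size=4 mirrors attention_slice_size above; the result matches
# the unsliced computation, only peak memory differs.
full = sliced_attention(q, k, v, slice_size=16)
print(np.allclose(sliced_attention(q, k, v, slice_size=4), full))  # True
```

The same chunk-and-accumulate trick is what `force_tiled_decode` applies to the VAE decode step, just over image tiles instead of attention rows.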
Is there an existing issue for this?
Contact Details
Jacoby#1218
What should this feature add?
This feature would add GPU acceleration support for the Intel Arc A-Series cards and their PRO versions, in addition to Intel Iris Xe and some newer iGPUs. (I have no idea what support looks like in that regard, as Intel doesn't even seem to know what they support, but I do know at least Iris and Arc are supported.)
Alternatives
Stay on PyTorch, and port this extension https://github.com/intel/intel-extension-for-pytorch to Windows: I have no idea how much time that would take for you guys, but it may be more useful in the future. Currently, there are no other options for Intel oneAPI support on Windows besides TensorFlow (with the Intel extension).
Additional Content
No response