simonlui / Docker_IPEX_ComfyUI

Stable Diffusion ComfyUI Docker/OCI Image for Intel Arc GPUs
Apache License 2.0
22 stars 2 forks source link

Installing steps issue #3

Closed gmbhneo closed 8 months ago

gmbhneo commented 9 months ago

This error occurs upon using "docker build -t ipex-arc-comfy:latest -f Dockerfile ."

===

3.540 Some packages could not be installed. This may mean that you have
3.540 requested an impossible situation or if you are using the unstable
3.540 distribution that some required packages have not yet been created
3.540 or been moved out of Incoming.
3.540 The following information may help to resolve the situation:
3.540
3.540 The following packages have unmet dependencies:
3.757  intel-oneapi-runtime-dpcpp-cpp : Depends: intel-oneapi-runtime-compilers (= 2023.2.1-16) but it is not going to be installed
3.757                                   Depends: intel-oneapi-runtime-dpcpp-cpp-common (= 2023.2.1-16) but 2023.2.2-47 is to be installed
3.757                                   Depends: intel-oneapi-runtime-opencl (= 2023.2.1-16) but 2023.2.2-47 is to be installed
3.763 E: Unable to correct problems, you have held broken packages.

Dockerfile:26

25 |     ARG CMPLR_COMMON_VER=2023.2.1
26 | >>> RUN apt-get update && \
27 | >>>     apt-get install -y --no-install-recommends --fix-missing \
28 | >>>     intel-oneapi-runtime-dpcpp-cpp=${DPCPP_VER} \
29 | >>>     intel-oneapi-runtime-mkl=${MKL_VER} \
30 | >>>     intel-oneapi-compiler-shared-common-${CMPLR_COMMON_VER}=${DPCPP_VER}
31 |

ERROR: failed to solve: process "/bin/sh -c apt-get update && apt-get install -y --no-install-recommends --fix-missing intel-oneapi-runtime-dpcpp-cpp=${DPCPP_VER} intel-oneapi-runtime-mkl=${MKL_VER} intel-oneapi-compiler-shared-common-${CMPLR_COMMON_VER}=${DPCPP_VER}" did not complete successfully: exit code: 100
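
The unmet dependencies above come from the apt version pins in the Dockerfile no longer matching what Intel's oneAPI apt repository serves. One way to see which versions the repository currently offers before adjusting the DPCPP_VER/MKL_VER build arguments (a quick sketch, to be run in any container or WSL environment that already has Intel's oneAPI apt repository configured):

# Show the candidate versions Intel's apt repository currently provides for the
# packages the Dockerfile pins, so the pinned versions can be matched to them.
apt-get update
apt-cache policy intel-oneapi-runtime-dpcpp-cpp intel-oneapi-runtime-dpcpp-cpp-common intel-oneapi-runtime-opencl intel-oneapi-runtime-mkl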

gmbhneo commented 9 months ago

This happens even when running in WSL or on Windows with Docker directly.

simonlui commented 9 months ago

This should be fixed now, I believe. oneAPI was updated, so the Docker image needed to be updated as well. Let me know if the latest version works for you; I can't fully test at the moment because my Linux install is waiting on upstream fixes, but the build does get to the point you hit in the prior issue, where IPEX isn't detected.
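
To pick up that update, a sketch of the rebuild, assuming the repository was cloned and originally built with the command from the first post:

cd Docker_IPEX_ComfyUI       # the cloned repository folder
git pull                     # fetch the updated Dockerfile
docker build -t ipex-arc-comfy:latest -f Dockerfile .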

gmbhneo commented 9 months ago

Now I get the following issue using this command as you provided:

docker run -it --device /dev/dxg -e ComfyArgs="--highvram" --name comfy-server --network=host -p 8188:8188 -v /usr/lib/wsl:/usr/lib/wsl -v C:/ComfyUI:/ComfyUI:Z -v deps:/deps -v huggingface:/root/.cache/huggingface ipex-arc-comfy:latest

What's Next?

  1. Sign in to your Docker account → docker login
  2. View a summary of image vulnerabilities and recommendations → docker scout quickview

PS C:\ComfyUI\Docker_IPEX_ComfyUI> docker run -it --device /dev/dxg -e ComfyArgs="--highvram" --name comfy-server --network=host -p 8188:8188 -v /usr/lib/wsl:/usr/lib/wsl -v C:/ComfyUI:/ComfyUI:Z -v deps:/deps -v huggingface:/root/.cache/huggingface ipex-arc-comfy:latest
WARNING: Published ports are discarded when using host network mode
fatal: destination path '.' already exists and is not an empty directory.
fatal: not in a git directory
Looking in links: https://developer.intel.com/ipex-whl-stable-xpu
ERROR: Could not find a version that satisfies the requirement torch==2.0.1a0 (from versions: 1.11.0, 1.12.0, 1.12.1, 1.13.0, 1.13.1, 2.0.0, 2.0.1, 2.1.0, 2.1.1)
ERROR: No matching distribution found for torch==2.0.1a0
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
No command to use ipexrun to launch ComfyUI. Launching normally.
python3: can't open file '/ComfyUI/main.py': [Errno 2] No such file or directory

gmbhneo commented 9 months ago

After starting the docker container straight from docker itself, I get the following error:

2023-11-27 09:36:36 No command to use ipexrun to launch ComfyUI. Launching normally.
2023-11-27 09:36:37 Traceback (most recent call last):
2023-11-27 09:36:37   File "/ComfyUI/main.py", line 72, in <module>
2023-11-27 09:36:37     import execution
2023-11-27 09:36:37   File "/ComfyUI/execution.py", line 12, in <module>
2023-11-27 09:36:37     import nodes
2023-11-27 09:36:37   File "/ComfyUI/nodes.py", line 20, in <module>
2023-11-27 09:36:37     import comfy.diffusers_load
2023-11-27 09:36:37   File "/ComfyUI/comfy/diffusers_load.py", line 4, in <module>
2023-11-27 09:36:37     import comfy.sd
2023-11-27 09:36:37   File "/ComfyUI/comfy/sd.py", line 5, in <module>
2023-11-27 09:36:37     from comfy import model_management
2023-11-27 09:36:37   File "/ComfyUI/comfy/model_management.py", line 114, in <module>
2023-11-27 09:36:37     total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
2023-11-27 09:36:37   File "/ComfyUI/comfy/model_management.py", line 83, in get_torch_device
2023-11-27 09:36:37     return torch.device(torch.cuda.current_device())
2023-11-27 09:36:37   File "/deps/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 769, in current_device
2023-11-27 09:36:37     _lazy_init()
2023-11-27 09:36:37   File "/deps/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 298, in _lazy_init
2023-11-27 09:36:37     torch._C._cuda_init()
2023-11-27 09:36:37 RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx

simonlui commented 9 months ago

It's not just you; Intel broke their install URL for PyTorch. I made a workaround for it. Can you delete the image, rebuild it, and see if the issue is solved?
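
A sketch of that clean rebuild, using the container and image names from the commands earlier in this thread:

docker rm -f comfy-server                                          # remove the old container, if present
docker rmi ipex-arc-comfy:latest                                   # delete the old image
docker build --no-cache -t ipex-arc-comfy:latest -f Dockerfile .   # rebuild without cached layers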

gmbhneo commented 9 months ago

Now I got the following error:

Collecting mpmath>=0.19
  Downloading mpmath-1.3.0-py3-none-any.whl (536 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 536.2/536.2 KB 35.8 MB/s eta 0:00:00
Installing collected packages: mpmath, urllib3, typing-extensions, sympy, psutil, pillow, numpy, networkx, MarkupSafe, idna, filelock, charset-normalizer, certifi, requests, jinja2, intel-extension-for-pytorch, torch, torchvision
Successfully installed MarkupSafe-2.1.3 certifi-2023.11.17 charset-normalizer-3.3.2 filelock-3.13.1 idna-3.6 intel-extension-for-pytorch-2.0.110+xpu jinja2-3.1.2 mpmath-1.3.0 networkx-3.2.1 numpy-1.26.2 pillow-10.1.0 psutil-5.9.6 requests-2.31.0 sympy-1.12 torch-2.0.1a0+cxx11.abi torchvision-0.15.2a0+cxx11.abi typing-extensions-4.8.0 urllib3-2.1.0
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
No command to use ipexrun to launch ComfyUI. Launching normally.
python3: can't open file '/ComfyUI/main.py': [Errno 2] No such file or directory

I get the feeling that my run command is not working correctly. I am using:

docker run -it --device /dev/dxg -e ComfyArgs="--highvram" --name comfy-server --network=host -p 8188:8188 -v /usr/lib/wsl:/usr/lib/wsl -v C:/ComfyUI:/ComfyUI:Z -v deps:/deps -v huggingface:/root/.cache/huggingface ipex-arc-comfy:latest

My folder was created at C:\ComfyUI\Docker_IPEX_ComfyUI. Do I have to change the command?

gmbhneo commented 9 months ago

Also, when running the Docker container itself without any command through the Docker Windows application, I get the following error:

2023-11-28 08:52:23 No command to use ipexrun to launch ComfyUI. Launching normally.
2023-11-28 08:52:24 /deps/venv/lib/python3.10/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: ''If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
2023-11-28 08:52:24   warn(
2023-11-28 08:52:24 Traceback (most recent call last):
2023-11-28 08:52:24   File "/ComfyUI/main.py", line 72, in <module>
2023-11-28 08:52:24     import execution
2023-11-28 08:52:24   File "/ComfyUI/execution.py", line 12, in <module>
2023-11-28 08:52:24     import nodes
2023-11-28 08:52:24   File "/ComfyUI/nodes.py", line 20, in <module>
2023-11-28 08:52:24     import comfy.diffusers_load
2023-11-28 08:52:24   File "/ComfyUI/comfy/diffusers_load.py", line 4, in <module>
2023-11-28 08:52:24     import comfy.sd
2023-11-28 08:52:24   File "/ComfyUI/comfy/sd.py", line 5, in <module>
2023-11-28 08:52:24     from comfy import model_management
2023-11-28 08:52:24   File "/ComfyUI/comfy/model_management.py", line 114, in <module>
2023-11-28 08:52:24     total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
2023-11-28 08:52:24   File "/ComfyUI/comfy/model_management.py", line 83, in get_torch_device
2023-11-28 08:52:24     return torch.device(torch.cuda.current_device())
2023-11-28 08:52:24   File "/deps/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 674, in current_device
2023-11-28 08:52:24     _lazy_init()
2023-11-28 08:52:24   File "/deps/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 239, in _lazy_init
2023-11-28 08:52:24     raise AssertionError("Torch not compiled with CUDA enabled")
2023-11-28 08:52:24 AssertionError: Torch not compiled with CUDA enabled

simonlui commented 9 months ago

I need some time to work on this. I can test again now that my Linux platform detects my Arc GPU in IPEX and SYCL, but Intel's dependencies are not playing nice with one another in oneAPI 2024, and I need to find out why or find a version to revert to. I did fix the edge case you brought up in your first post; the Docker image is now supposed to handle both cases, whether it detects the repository is there or not. Will update when I can figure out what is going on. Ideally, Intel would just drop a new IPEX for XPU like they have been hinting at this week or so, but all we know is that it should land by the end of the year, which isn't saying much, and that it's going to support PyTorch 2.1, which should bring it more up to date on capabilities.

simonlui commented 9 months ago

Never mind, I solved it with the first version I tried. It should be fixed now, as I can run images with it, but it's best to get confirmation from you before closing the issue.

gmbhneo commented 9 months ago

Still the same errors.

simonlui commented 9 months ago

If you are still receiving the same issue, you will most likely need to deep clean everything. You should be able to clear everything with docker system prune -a, which removes stopped containers, unused networks and images, and the build cache; that is fine as long as you don't mind clearing all of that away and aren't using Docker for anything else. Then rebuild the image.
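
For reference, a sketch of that full reset. Note that docker system prune -a leaves named volumes such as deps and huggingface alone unless you also pass --volumes, so the cached Python dependencies can be dropped separately if needed:

docker system prune -a                                              # removes stopped containers, unused networks/images, and build cache
docker volume rm deps                                               # optional: also discard the cached /deps Python environment
docker build --no-cache -t ipex-arc-comfy:latest -f Dockerfile .    # rebuild the image from scratch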

gmbhneo commented 9 months ago

(screenshot)

So something changed but it is still not working for me.

simonlui commented 9 months ago

Your screenshot finally confirmed the issue. It also tells me you did not read the documentation carefully; please do so next time. You are mounting the ComfyUI folder/location incorrectly, so the script isn't running anything at all. The mount needs to be either a git clone of the ComfyUI repository itself or an empty location the repository can be cloned into. For example, if you ran git clone https://github.com/comfyanonymous/ComfyUI in C:/, the C drive root directory, you would specify C:/ComfyUI:/ComfyUI:Z to point at it correctly, since C:/ComfyUI is then where ComfyUI is located.
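
Putting that together with the run command used earlier in the thread, a sketch of the corrected setup, assuming C:/ComfyUI is empty or does not exist yet (adjust the path if you clone somewhere else):

# The -v C:/ComfyUI:/ComfyUI:Z mount must point at the ComfyUI clone, not at the Docker_IPEX_ComfyUI repository.
git clone https://github.com/comfyanonymous/ComfyUI C:/ComfyUI
docker run -it --device /dev/dxg -e ComfyArgs="--highvram" --name comfy-server --network=host -p 8188:8188 -v /usr/lib/wsl:/usr/lib/wsl -v C:/ComfyUI:/ComfyUI:Z -v deps:/deps -v huggingface:/root/.cache/huggingface ipex-arc-comfy:latest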

gmbhneo commented 9 months ago

Okay, so after doing this, I get this error:

PS C:\ComfyUI\Docker_IPEX_ComfyUI> docker run -it --device /dev/dxg -e ComfyArgs="--highvram" --name comfy-server-2 --network=host -p 8188:8188 -v /usr/lib/wsl:/usr/lib/wsl -v C:/ComfyUI/ComfyUI:/ComfyUI:Z -v deps:/deps -v huggingface:/root/.cache/huggingface ipex-arc-comfy:latest
WARNING: Published ports are discarded when using host network mode
Activating python venv.
No command to use ipexrun to launch ComfyUI. Launching normally.
Traceback (most recent call last):
  File "/ComfyUI/main.py", line 69, in <module>
    import comfy.utils
  File "/ComfyUI/comfy/utils.py", line 5, in <module>
    import safetensors.torch
ModuleNotFoundError: No module named 'safetensors'

simonlui commented 9 months ago

Sorry for the late reply. The dependencies didn't all install correctly, and the script is also failing to detect that your container is launching for the first time. I made a change to improve that detection. See if doing the system prune and remaking the image works for you.
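
If you want to confirm the dependencies actually landed in the deps volume after the rebuild, one possible probe (a sketch; the /deps/venv path is taken from the tracebacks above and the volume name from your run command):

# Open a throwaway shell against the deps volume and check that safetensors imports from the venv.
docker run -it --rm -v deps:/deps --entrypoint bash ipex-arc-comfy:latest -c "/deps/venv/bin/python -c 'import safetensors; print(safetensors.__version__)'"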

gmbhneo commented 9 months ago

Okay, looks like the installation process went well.

The next issue is that I am not able to connect to the GUI through the given IP. I just get a connection error in my browser.

gmbhneo commented 9 months ago

(screenshot) docker ps is also showing that there is no port assigned.

(screenshot) But Docker is showing it correctly.

Using docker inspect comfy-server gives me the following:

"PortBindings": { "8188/tcp": [ { "HostIp": "", "HostPort": "8188" } ] },

"NetworkSettings": { "Bridge": "", "SandboxID": "ab4923359688e2026378bf9b706a64ba633da1d18390c58bedffc05e20f92f13", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": {}, "SandboxKey": "/var/run/docker/netns/default", "SecondaryIPAddresses": null, "SecondaryIPv6Addresses": null, "EndpointID": "", "Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "MacAddress": "", "Networks": { "host": { "IPAMConfig": null, "Links": null, "Aliases": null, "NetworkID": "93dfb9ed5fe9356a5d166928fb5510c8b9e4b75540ee3025573192ab4ce11fdc", "EndpointID": "8e0027fcda9b73cf3595e68ada6ee85650d35b976410c5e5f8800ebb0a7b3219", "Gateway": "", "IPAddress": "", "IPPrefixLen": 0, "IPv6Gateway": "", "GlobalIPv6Address": "", "GlobalIPv6PrefixLen": 0, "MacAddress": "", "DriverOpts": null } } }

gmbhneo commented 9 months ago

I've tried removing the --network=host option, but now I get an "ERR_CONNECTION_RESET" error in my browser.

(screenshot) But at least I got an internal IP now.

Still, I can't access the webui.

simonlui commented 9 months ago

It may be the case that you need to add --listen to your ComfyUI arguments, given that you are running through WSL2 and I have no clue how it passes the network configuration through from the host. You can also remove --network=host, as that is clearly not helping you the way it does in a native Linux environment.
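
A sketch of the resulting command with both changes, keeping the mounts from the earlier posts (the ComfyUI path is whatever your clone location is):

# No --network=host, the port published normally, and --listen added so ComfyUI binds beyond localhost inside the container.
docker run -it --device /dev/dxg -e ComfyArgs="--listen --highvram" --name comfy-server -p 8188:8188 -v /usr/lib/wsl:/usr/lib/wsl -v C:/ComfyUI/ComfyUI:/ComfyUI:Z -v deps:/deps -v huggingface:/root/.cache/huggingface ipex-arc-comfy:latest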

gmbhneo commented 8 months ago

It is working now; however, rendering an image gets me this error:

(screenshot)

(screenshot)

simonlui commented 8 months ago

That's a driver crash. Weren't you using the latest Windows driver? The workflow you are using is the default one, right?

If you can, do a basic test of your system and of whether IPEX works by running the following command:

podman run -it --rm --device /dev/dri --network=host --entrypoint bash ipex-arc-comfy:latest

And then run the following once the container starts.

python -m pip install torch==2.0.1a0 torchvision==0.15.2a0 intel-extension-for-pytorch==2.0.110+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
python -c "import torch;import intel_extension_for_pytorch as ipex;print(ipex.xpu.get_device_name(0))"

You should see something like the following output when run, with the device name as the final line.

root@ubuntu:/$ python -c "import torch;import intel_extension_for_pytorch as ipex;print(ipex.xpu.get_device_name(0))"
/usr/local/lib/python3.10/dist-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: ''If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
  warn(
Intel(R) Arc(TM) A770 Graphics

If you see anything else, please post it.

simonlui commented 8 months ago

Can you confirm if your issue has been fixed? Otherwise, I will be closing the issue soon.

gmbhneo commented 8 months ago

I am sorry, I had to change my GPU since the Intel card was giving me bluescreens and other problems, so I am not able to test this anymore :(

simonlui commented 8 months ago

Sorry to hear that. But I am pretty sure it would be working were it not for the issues you encountered with the card. Closing, since that seems to be the case.

midyoyo commented 6 months ago

I encountered a similar error in Windows PowerShell (and also the same with WSL/Ubuntu) after trying to run the docker command, much like these two quoted errors:

Also, when running the Docker container itself without any command through the Docker Windows application, I get the following error:

2023-11-28 08:52:23 No command to use ipexrun to launch ComfyUI. Launching normally.
2023-11-28 08:52:24 /deps/venv/lib/python3.10/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: ''If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
2023-11-28 08:52:24   warn(
2023-11-28 08:52:24 Traceback (most recent call last):
2023-11-28 08:52:24   File "/ComfyUI/main.py", line 72, in <module>
2023-11-28 08:52:24     import execution
2023-11-28 08:52:24   File "/ComfyUI/execution.py", line 12, in <module>
2023-11-28 08:52:24     import nodes
2023-11-28 08:52:24   File "/ComfyUI/nodes.py", line 20, in <module>
2023-11-28 08:52:24     import comfy.diffusers_load
2023-11-28 08:52:24   File "/ComfyUI/comfy/diffusers_load.py", line 4, in <module>
2023-11-28 08:52:24     import comfy.sd
2023-11-28 08:52:24   File "/ComfyUI/comfy/sd.py", line 5, in <module>
2023-11-28 08:52:24     from comfy import model_management
2023-11-28 08:52:24   File "/ComfyUI/comfy/model_management.py", line 114, in <module>
2023-11-28 08:52:24     total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
2023-11-28 08:52:24   File "/ComfyUI/comfy/model_management.py", line 83, in get_torch_device
2023-11-28 08:52:24     return torch.device(torch.cuda.current_device())
2023-11-28 08:52:24   File "/deps/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 674, in current_device
2023-11-28 08:52:24     _lazy_init()
2023-11-28 08:52:24   File "/deps/venv/lib/python3.10/site-packages/torch/cuda/__init__.py", line 239, in _lazy_init
2023-11-28 08:52:24     raise AssertionError("Torch not compiled with CUDA enabled")
2023-11-28 08:52:24 AssertionError: Torch not compiled with CUDA enabled

That's a driver crash. Weren't you using the latest Windows driver? The workflow you are using is the default one, right?

If you can, do a basic test of your system and of whether IPEX works by running the following command:

podman run -it --rm --device /dev/dri --network=host --entrypoint bash ipex-arc-comfy:latest

And then run the following once the container starts.

python -m pip install torch==2.0.1a0 torchvision==0.15.2a0 intel-extension-for-pytorch==2.0.110+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
python -c "import torch;import intel_extension_for_pytorch as ipex;print(ipex.xpu.get_device_name(0))"

You should see something like the following one-line output when run.

root@ubuntu:/$ python -c "import torch;import intel_extension_for_pytorch as ipex;print(ipex.xpu.get_device_name(0))"
/usr/local/lib/python3.10/dist-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: ''If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
  warn(
Intel(R) Arc(TM) A770 Graphics

If you see anything else, please post it.

This is the error/warning I received after creating the container:

Successfully installed aiohttp-3.9.3 aiosignal-1.3.1 async-timeout-4.0.3 attrs-23.2.0 einops-0.7.0 frozenlist-1.4.1 huggingface-hub-0.20.3 multidict-6.0.5 pyyaml-6.0.1 regex-2023.12.25 safetensors-0.4.2 scipy-1.12.0 tokenizers-0.15.1 torchsde-0.2.6 tqdm-4.66.1 trampoline-0.1.2 transformers-4.37.2 yarl-1.9.4
No command to use ipexrun to launch ComfyUI. Launching normally.
/deps_latest/venv/lib/python3.10/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: ''If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
  warn(
Total VRAM 4018 MB, total RAM 15844 MB
Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --normalvram
Set vram state to: LOW_VRAM
Device: xpu
VAE dtype: torch.bfloat16
Using pytorch cross attention
Starting server

To see the GUI go to: http://127.0.0.1:8188

This is how I start/create the docker container through Powershell: docker run -it --device /dev/dxg -e ComfyArgs="--highvram" --name comfy-server -p 8188:8188 -v /usr/lib/wsl:/usr/lib/wsl -v ComfyUI:/ComfyUI:Z -v deps:/deps -v huggingface:/root/.cache/huggingface ipex-arc-comfy:latest

After encountering various problems and trying multiple fixes for several hours, I managed to get the Docker container running. However, when I try to access it through the browser UI, I get an 'ERR_EMPTY_RESPONSE' and see nothing.

Also, the current Dockerfile does not work because of a versioning problem with the Intel oneAPI packages. I used the config from Dockerfile.latest, which happened to work.

I have an A370M 4GB with the latest driver 31.0.101.5194

simonlui commented 6 months ago

I encountered a similar error in Windows Powershell (and also the same with WSL/Ubuntu) after trying to run the docker command like these two errors: This is the error/warning I received after creating the container:

Successfully installed aiohttp-3.9.3 aiosignal-1.3.1 async-timeout-4.0.3 attrs-23.2.0 einops-0.7.0 frozenlist-1.4.1 huggingface-hub-0.20.3 multidict-6.0.5 pyyaml-6.0.1 regex-2023.12.25 safetensors-0.4.2 scipy-1.12.0 tokenizers-0.15.1 torchsde-0.2.6 tqdm-4.66.1 trampoline-0.1.2 transformers-4.37.2 yarl-1.9.4
No command to use ipexrun to launch ComfyUI. Launching normally.
/deps_latest/venv/lib/python3.10/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: ''If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
  warn(
Total VRAM 4018 MB, total RAM 15844 MB
Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --normalvram
Set vram state to: LOW_VRAM
Device: xpu
VAE dtype: torch.bfloat16
Using pytorch cross attention
Starting server

To see the GUI go to: http://127.0.0.1:8188

This is how I start/create the docker container through Powershell: docker run -it --device /dev/dxg -e ComfyArgs="--highvram" --name comfy-server -p 8188:8188 -v /usr/lib/wsl:/usr/lib/wsl -v ComfyUI:/ComfyUI:Z -v deps:/deps -v huggingface:/root/.cache/huggingface ipex-arc-comfy:latest

After encountering various problems and trying multiple fixes for several hours, I managed to get the Docker container running. However, when I try to access it through the browser UI, I get an 'ERR_EMPTY_RESPONSE' and see nothing.

Also, the current Dockerfile does not work because of a versioning problem with the Intel oneAPI packages. I used the config from Dockerfile.latest, which happened to work.

I have an A370M 4GB with the latest driver 31.0.101.5194

Sorry for the late reply, I've been busy with things in my life and at work. I think your issue is that you don't have --listen in your arguments for ComfyUI; without it, the port ComfyUI is broadcasting on won't be picked up outside the container. It's also not wise to blindly copy arguments: you should not be using --highvram with only 4GB of VRAM. And lastly, you are not the intended audience for this Dockerfile; on Windows it should only be used in very niche technical scenarios involving custom extensions that require Linux under a Windows operating system. You are much better off trying to get a native version of ComfyUI working. I have linked instructions in my Readme here, but keep in mind the latest IPEX does not work, so you need to use v2.0.120+xpu. Good luck.
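
For reference, a sketch of that run command with both argument changes applied (mount paths are copied from the command quoted above and may differ on your machine):

# --listen lets the published 8188 port reach ComfyUI; --highvram is dropped for a 4GB card.
docker run -it --device /dev/dxg -e ComfyArgs="--listen" --name comfy-server -p 8188:8188 -v /usr/lib/wsl:/usr/lib/wsl -v ComfyUI:/ComfyUI:Z -v deps:/deps -v huggingface:/root/.cache/huggingface ipex-arc-comfy:latest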