Haidra-Org / AI-Horde-Worker

This repo turns your PC into an AI Horde worker node
GNU Affero General Public License v3.0

Make CUDA optional - not all platforms with GPUs support Nvidia. #182

Open axemaster opened 1 year ago

axemaster commented 1 year ago

Could not find a similar issue, but this is a showstopper for Mac (M2 Pro etc.). Short story: Apple will never support Nvidia again.

The A1111 unofficial worker extension does not have this dependency, but it broke yesterday for unknown reasons, so I tried to install the official worker. I cannot, because of this dependency on the Nvidia CUDA toolkit.

My Mac is fast enough to contribute to the horde (I have somewhere north of 500k kudos), but right now I cannot generate more images. Help?

db0 commented 1 year ago

Unfortunately we don't have a macOS developer or a macOS machine to develop on. Do you know how to make this work yourself?

axemaster commented 1 year ago

> Unfortunately we don't have a macOS developer or a macOS machine to develop on. Do you know how to make this work yourself?

A1111 uses PyTorch, and the Mac build natively supports Apple GPUs. When I generate an image, my GPU gets pegged. So my request is an option to use PyTorch without requiring CUDA.
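For context, the device preference being asked for can be sketched as a small selection function. On a real PyTorch install the flags would come from `torch.cuda.is_available()` and `torch.backends.mps.is_available()` (the `mps` Metal backend landed in PyTorch 1.12). `pick_device` is a hypothetical helper, not this repo's API; it is kept torch-free so the logic is clear:

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Prefer CUDA, then Apple's Metal ("mps") backend, then plain CPU.

    On a real install the flags would be fed from:
        torch.cuda.is_available()
        torch.backends.mps.is_available()   # PyTorch >= 1.12
    """
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"
```

A worker that selects its device this way degrades gracefully on machines without an Nvidia card instead of failing at import time.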

axemaster commented 1 year ago

> The A1111 unofficial worker extension does not have this dependency but broke yesterday for unknown reasons

It's working again now, without any comments on any of the issues I opened against it. I'm assuming a horde server-side API fix, as they apparently broke... something. Still, I'd rather run the official worker, so I'm leaving this open.

axemaster commented 1 year ago

The 5-11 announcement implies a new (official?) worker is coming. I hope it does not require CUDA!

db0 commented 1 year ago

You can try it out on the comfy branch.

tazlin commented 1 year ago

I have seen a couple of requests to use AMD cards.

FredHappyface commented 9 months ago

Hey, thank you for this awesome project! I've recently been looking into running the alchemist worker locally. I'm curious what the blocker for supporting other architectures is. Is it PyTorch? (I can see that there is a CPU-only mode, but installing it just by updating requirements.txt results in a big sad.)

Logs:

```
AssertionError: Torch not compiled with CUDA enabled
Exception in thread Thread-7 (_reload_models):
Traceback (most recent call last):
  File "C:\Users\Dell\Documents\GitHub\AI-Horde-Worker\conda\envs\windows\lib\threading.py", line 1009, in _bootstrap_inner
    self.run()
  File "C:\Users\Dell\Documents\GitHub\AI-Horde-Worker\conda\envs\windows\lib\threading.py", line 946, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\Dell\Documents\GitHub\AI-Horde-Worker\conda\envs\windows\lib\site-packages\loguru\_logger.py", line 1277, in catch_wrapper
    return function(*args, **kwargs)
  File "C:\Users\Dell\Documents\GitHub\AI-Horde-Worker\worker\bridge_data\framework.py", line 218, in _reload_models
    success = model_manager.load(model)
  File "C:\Users\Dell\Documents\GitHub\AI-Horde-Worker\conda\envs\windows\lib\site-packages\hordelib\model_manager\hyper.py", line 417, in load
    return model_manager.load(
  File "C:\Users\Dell\Documents\GitHub\AI-Horde-Worker\conda\envs\windows\lib\site-packages\hordelib\model_manager\base.py", line 319, in load
    self.ensure_ram_available()
  File "C:\Users\Dell\Documents\GitHub\AI-Horde-Worker\conda\envs\windows\lib\site-packages\hordelib\model_manager\base.py", line 234, in ensure_ram_available
    vram_headroom = get_torch_free_vram_mb() - UserSettings.get_vram_to_leave_free_mb()
  File "C:\Users\Dell\Documents\GitHub\AI-Horde-Worker\conda\envs\windows\lib\site-packages\hordelib\comfy_horde.py", line 176, in get_torch_free_vram_mb
    return round(_comfy_get_free_memory() / (1024 * 1024))
  File "C:\Users\Dell\Documents\GitHub\AI-Horde-Worker\conda\envs\windows\lib\site-packages\hordelib\_comfyui\comfy\model_management.py", line 380, in get_free_memory
    dev = get_torch_device()
  File "C:\Users\Dell\Documents\GitHub\AI-Horde-Worker\conda\envs\windows\lib\site-packages\hordelib\_comfyui\comfy\model_management.py", line 141, in get_torch_device
    return torch.cuda.current_device()
  File "C:\Users\Dell\Documents\GitHub\AI-Horde-Worker\conda\envs\windows\lib\site-packages\torch\cuda\__init__.py", line 674, in current_device
    _lazy_init()
  File "C:\Users\Dell\Documents\GitHub\AI-Horde-Worker\conda\envs\windows\lib\site-packages\torch\cuda\__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
```