VectorSpaceLab / OmniGen

OmniGen: Unified Image Generation. https://arxiv.org/pdf/2409.11340
MIT License

Errors when running on macOS Sonoma (M1): RuntimeError: OffloadedCache can only be used with a GPU #55

Open adamreading opened 3 weeks ago

adamreading commented 3 weeks ago

Apple M1 silicon Mac, 16 GB RAM, macOS Sonoma 14.6.1 (23G93). Installed with Pinokio; no obvious install errors (zip attached).

I have tried every combination of checkboxes, and no matter what they are set to I always get the same error: RuntimeError: OffloadedCache can only be used with a GPU.


group: /Users/Shared/pinokio/api/omnigen.git/start.js

id: 5a382145-df88-445b-a9f4-55a2a2dc5ca9

index: 1

newlogs.zip

cmd: eval "$(conda shell.bash hook)" && conda deactivate && conda deactivate && conda deactivate && conda activate base && source /Users/Shared/pinokio/api/omnigen.git/app/env/bin/activate /Users/Shared/pinokio/api/omnigen.git/app/env && python app.py

timestamp: 10/29/2024, 6:12:23 PM (1730225543006)

The default interactive shell is now zsh. To update your account to use zsh, please run chsh -s /bin/zsh. For more details, please visit https://support.apple.com/kb/HT208050.

eval "$(conda shell.bash hook)" && conda deactivate && conda deactivate && conda deactivate && conda activate base && source /Users/Shared/pinokio/api/omnigen.git/app/env/bin/activate /Users/Shared/pinokio/api/omnigen.git/app/env && python app.py

Fetching 10 files: 100%|██████████| 10/10 [00:00<00:00, 61771.78it/s]
Loading safetensors

To create a public link, set share=True in launch().

Traceback (most recent call last):
  File "/Users/Shared/pinokio/api/omnigen.git/app/env/lib/python3.10/site-packages/gradio/queueing.py", line 624, in process_events
    response = await route_utils.call_process_api(
  File "/Users/Shared/pinokio/api/omnigen.git/app/env/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api
    output = await app.get_blocks().process_api(
  File "/Users/Shared/pinokio/api/omnigen.git/app/env/lib/python3.10/site-packages/gradio/blocks.py", line 2018, in process_api
    result = await self.call_function(
  File "/Users/Shared/pinokio/api/omnigen.git/app/env/lib/python3.10/site-packages/gradio/blocks.py", line 1567, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "/Users/Shared/pinokio/api/omnigen.git/app/env/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/Users/Shared/pinokio/api/omnigen.git/app/env/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2441, in run_sync_in_worker_thread
    return await future
  File "/Users/Shared/pinokio/api/omnigen.git/app/env/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 943, in run
    result = context.run(func, *args)
  File "/Users/Shared/pinokio/api/omnigen.git/app/env/lib/python3.10/site-packages/gradio/utils.py", line 846, in wrapper
    response = f(*args, **kwargs)
  File "/Users/Shared/pinokio/api/omnigen.git/app/app.py", line 22, in generate_image
    output = pipe(
  File "/Users/Shared/pinokio/api/omnigen.git/app/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/Shared/pinokio/api/omnigen.git/app/OmniGen/pipeline.py", line 280, in __call__
    samples = scheduler(latents, func, model_kwargs, use_kv_cache=use_kv_cache, offload_kv_cache=offload_kv_cache)
  File "/Users/Shared/pinokio/api/omnigen.git/app/OmniGen/scheduler.py", line 156, in __call__
    cache = [OmniGenCache(num_tokens_for_img, offload_kv_cache) for _ in range(len(model_kwargs['input_ids']))] if use_kv_cache else None
  File "/Users/Shared/pinokio/api/omnigen.git/app/OmniGen/scheduler.py", line 156, in <listcomp>
    cache = [OmniGenCache(num_tokens_for_img, offload_kv_cache) for _ in range(len(model_kwargs['input_ids']))] if use_kv_cache else None
  File "/Users/Shared/pinokio/api/omnigen.git/app/OmniGen/scheduler.py", line 14, in __init__
    raise RuntimeError("OffloadedCache can only be used with a GPU")
RuntimeError: OffloadedCache can only be used with a GPU
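The traceback shows the error fires inside OmniGenCache (OmniGen/scheduler.py, line 14), which refuses KV-cache offloading unless a CUDA GPU is present, and that the pipeline simply forwards use_kv_cache and offload_kv_cache down to the scheduler. Assuming that call path, a minimal sketch of a workaround is to force offloading off on non-CUDA devices; the helper below is hypothetical and not part of OmniGen:

```python
# Hypothetical helper (not part of OmniGen): pick cache settings that
# avoid the GPU-only OffloadedCache guard seen in the traceback above.
def safe_cache_kwargs(device: str, use_kv_cache: bool = True) -> dict:
    """Only request KV-cache offloading on a CUDA device.

    On 'mps' (Apple silicon) or 'cpu', offload_kv_cache is forced off so
    OmniGenCache's "OffloadedCache can only be used with a GPU" check
    is never triggered.
    """
    offload = use_kv_cache and device.startswith("cuda")
    return {"use_kv_cache": use_kv_cache, "offload_kv_cache": offload}

# Sketch of usage on an M1 Mac (pipe and prompt as in app.py):
#   output = pipe(prompt, **safe_cache_kwargs("mps"))
```

This keeps the KV cache itself (which only needs memory, not CUDA) while skipping the offload step that requires a GPU.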

staoxiao commented 3 weeks ago

@adamreading, the current code doesn't support the M1. I tried running it on a Mac with an M2 chip, and it was very slow, so I don't recommend running OmniGen there. It may require specific optimizations.

adamreading commented 3 weeks ago

I’ve seen it was released as a ComfyUI node today; when I get home I will try that on my cloud server.

yukiarimo commented 3 weeks ago

Following

yukiarimo commented 3 weeks ago

The HF demo doesn't work

fah commented 3 weeks ago

@adamreading

> comfyui

You meant this?: https://github.com/AIFSH/OmniGen-ComfyUI

sommersohn commented 1 week ago

Having the same error here on my MacBook Air M1.

adamreading commented 1 week ago

> Having the same errors here on my Macbook Air M1.

I ended up running it on a cloud server instead - way faster and super easy to use - https://mimicpc.com/?fpr=adam47