-
It's great that we can generate our own videos. The problem now is time. I have a Ryzen 5 3600X (12 cores), 32 GB of memory, and a 4 GB GTX 1050 Ti GPU. It takes almost 5 hours to generate a video with the example setti…
-
Hi!
I have read through the README and found that the GPU requirements are not specified. Is there a minimum amount of graphics memory for the GPU? Also, how much time would it take to get the results fo…
-
Thanks for your contribution. Can you tell me what GPU I should use for training?
-
Are there any plans for adding GPU support to verdict? My GPU-accelerated code is hitting a big slowdown when I have to switch to host-only execution to call some verdict functions. It looks to me as …
-
The current script can only run on a Mac if config.computeUnits = .cpuOnly is set.
How can I change it to support GPU?
Thank you!
-
Hello!
I'm glad I accidentally stumbled upon your project! It turned out great!
However, I have a question...
Everything was installed according to the instructions. Windows 11, WSL2, Debian, D…
-
Why are you using only a single GPU?
If you use DistributedDataParallel or DataParallel, does it slow down?
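For reference on the question above, a minimal PyTorch sketch (not the repo's actual training code) of the multi-GPU wrapper being asked about. nn.DataParallel splits each batch across all visible GPUs within one process; DistributedDataParallel runs one process per GPU and usually scales better because it avoids the per-step scatter/gather, so it tends to speed things up rather than slow them down. The toy Linear model here is just an illustration:

```python
import torch
import torch.nn as nn

# Toy model standing in for the project's network (hypothetical, for illustration).
model = nn.Linear(8, 2)

# nn.DataParallel replicates the module across all visible GPUs and splits the
# batch along dim 0. On a CPU-only machine it degrades to a plain forward pass.
model = nn.DataParallel(model)

x = torch.randn(4, 8)
y = model(x)
print(tuple(y.shape))  # (4, 2)
```

DistributedDataParallel needs a process group (e.g. via torchrun) and per-rank device assignment, which is why single-GPU scripts often skip it.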
-
Hello,
Thank you for your good research.
I have a question about the inference process.
Currently, I am testing in two ways.
1. Huggingface inference
2. HiT-SRF-2x.pth pretrained inference
…
-
Is there a way to force which GPU CPAI runs on? I've used CUDA_VISIBLE_DEVICES on Windows, but it doesn't seem to have any effect (models appear to run on the GPU I wish to exclude). Additionally, does…
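A sketch of one common reason CUDA_VISIBLE_DEVICES can appear to have no effect: it is only honored if it is set in the process's environment before the CUDA runtime initializes, so setting it in a shell that is not the parent of the service, or after the framework has already touched a GPU, does nothing. The device index "1" below is just an example:

```python
import os

# Must run before importing torch (or starting the model server), otherwise
# the CUDA runtime has already enumerated all GPUs and the mask is ignored.
os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # expose only the second physical GPU

print(os.environ["CUDA_VISIBLE_DEVICES"])  # 1
```

For a service like CPAI that is launched by a system manager rather than your shell, the variable typically has to be set in the service's own environment configuration for the child process to inherit it.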
-
File "C:\Users\jsand\OmniGen\OmniGen\scheduler.py", line 16, in __init__
raise RuntimeError("OffloadedCache can only be used with a GPU. If there is no GPU, you need to set use_kv_cache=False, wh…
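The error above says to pass use_kv_cache=False when no GPU is available, since OffloadedCache requires a CUDA device. A rough, torch-free sketch of deriving that flag before invoking the pipeline (the nvidia-smi check is only a heuristic, not OmniGen's own detection logic):

```python
import shutil

# Heuristic GPU check: the nvidia-smi binary is normally on PATH when an
# NVIDIA driver is installed. This is an assumption, not a guaranteed signal.
has_gpu = shutil.which("nvidia-smi") is not None

# Disable the KV cache on CPU-only machines, as the error message suggests.
use_kv_cache = has_gpu

print(isinstance(use_kv_cache, bool))  # True
```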