Open jmwang0117 opened 10 months ago
Hello, thank you for your excellent work.
Does text-to-multi-view inference (demo.py) require 4x A6000 GPUs to complete?
I'm using a 3090 GPU, and inference fails with a CUDA out-of-memory error.
Can you try fp16? You can enable fp16 here
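For context, here is a minimal sketch of why fp16 helps with out-of-memory errors: half precision stores each parameter in 2 bytes instead of 4, roughly halving the model's memory footprint. The `Linear` layer below is a hypothetical stand-in for the actual pipeline in demo.py.

```python
import torch

# Stand-in model (hypothetical); the real pipeline in demo.py is much larger.
model = torch.nn.Linear(1024, 1024)
fp32_bytes = sum(p.numel() * p.element_size() for p in model.parameters())

# Cast weights to half precision (what an fp16 flag typically does internally).
model = model.half()
fp16_bytes = sum(p.numel() * p.element_size() for p in model.parameters())

print(fp32_bytes, fp16_bytes)  # fp16 uses half the parameter memory
```

On a 24 GB card like the 3090, halving weight and activation memory is often enough to fit a model that OOMs in fp32.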