Open · GallonDeng opened this issue 3 weeks ago
How can I run inference on multiple GPUs, such as RTX 4090s, since the model needs much more than 24 GB of memory?
Hi @AllenDun, thank you for your interest in our project.
There is currently no multi-GPU implementation. We are working on reducing the memory requirements.
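As a general (not project-specific) workaround while single-GPU memory is insufficient, naive model parallelism splits a network's layers across devices and moves activations between them during the forward pass. The sketch below assumes PyTorch; the layer names and sizes are invented for illustration, and it falls back to CPU when fewer than two GPUs are available:

```python
import torch
import torch.nn as nn

# Pick two devices; use CPU for both if we don't have at least two GPUs.
if torch.cuda.device_count() >= 2:
    dev0, dev1 = torch.device("cuda:0"), torch.device("cuda:1")
else:
    dev0 = dev1 = torch.device("cpu")

class SplitModel(nn.Module):
    """Naive model parallelism: first stage on dev0, second on dev1."""
    def __init__(self):
        super().__init__()
        # Hypothetical stages -- sizes are illustrative, not from this repo.
        self.stage0 = nn.Sequential(nn.Linear(512, 512), nn.ReLU()).to(dev0)
        self.stage1 = nn.Linear(512, 10).to(dev1)

    def forward(self, x):
        x = self.stage0(x.to(dev0))
        # Move intermediate activations to the second device.
        return self.stage1(x.to(dev1))

model = SplitModel().eval()
with torch.no_grad():
    out = model(torch.randn(4, 512))
print(tuple(out.shape))  # (4, 10)
```

Note that this only spreads the memory load; the stages run sequentially, so it does not speed anything up. Libraries such as Hugging Face Accelerate automate this kind of layer placement (e.g. `device_map="auto"` when loading a transformers checkpoint), but whether that applies here depends on how this project loads its weights.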