ai-forever / Kandinsky-3

https://ai-forever.github.io/Kandinsky-3/
Apache License 2.0

How to use get_T2I_Flash_pipeline with Kandinsky 3.1 weights for inference on a single 40 GB A100 GPU or multiple GPUs? #21

Open EYcab opened 2 months ago

EYcab commented 2 months ago

It seems your get_T2I_Flash_pipeline requires more than 40 GB of memory on a single A100 GPU when running inference. However, when I tried to load the model across 2×40 GB A100 GPUs, it seems your code does not support that yet.

Could you provide a way to load this model on multiple GPUs?
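Not a maintainer, but two directions may help until there is an official answer. Below is a minimal sketch that assumes the `get_T2I_Flash_pipeline(device_map, dtype_map)` signature shown in the repo's README; casting the UNet to fp16 is the usual memory-saving trick and is an untested assumption here, and whether `device_map` accepts a per-component dict for multi-GPU placement is likewise not confirmed by the README:

```python
import torch

# Casting submodules to fp16 roughly halves their memory footprint.
# The README already loads the text encoder in fp16; casting the UNet
# too is an assumption and may affect output quality.
dtype_map = {
    'unet': torch.float16,
    'text_encoder': torch.float16,
    'movq': torch.float32,
}

if torch.cuda.is_available():
    # Assumes the Kandinsky-3 repo is installed in the environment.
    from kandinsky3 import get_T2I_Flash_pipeline

    # README usage: one device for the whole pipeline.
    device_map = torch.device('cuda:0')

    # Hypothetical multi-GPU placement -- only valid if the pipeline
    # accepts a per-component dict, which the README does not confirm:
    # device_map = {'unet': 'cuda:0', 'text_encoder': 'cuda:1', 'movq': 'cuda:1'}

    t2i_pipe = get_T2I_Flash_pipeline(device_map, dtype_map)
    image = t2i_pipe("A cute corgi lives in a house made out of sushi.")
```

Alternatively, if the diffusers port of Kandinsky 3 works for your use case, its pipelines support `enable_model_cpu_offload()` / `enable_sequential_cpu_offload()`, which keep only the active submodule on the GPU and often make a single 40 GB card sufficient at the cost of speed.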