phalexo opened this issue 1 year ago
did you try it?
No, to try it I have to provide my info to download the checkpoints. If the above is not possible with the current code, then I don't want to register for the checkpoints.
Seems to work.
Before / After: (comparison screenshots)
Can I assume, you were also able to get an image out of it? Since they are using "accelerate" library already, I hoped it would work.
Thanks for the info.
Yes I can now confirm I was able to get images out of it with multi GPUs.
Also you can change
t5 = T5Embedder(device="cpu")
to use a GPU instead, e.g.
t5 = T5Embedder(device="cuda:0")
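A minimal sketch of that device choice with a CPU fallback. The helper name and signature here are hypothetical, not part of the deepfloyd_if API; it just shows the idea of keeping the original device="cpu" default when no GPU is free for T5:

```python
def pick_t5_device(gpu_count, preferred_index=0):
    """Return a device string for T5Embedder: the preferred CUDA
    device if that many GPUs exist, otherwise fall back to CPU."""
    if gpu_count > preferred_index:
        return f"cuda:{preferred_index}"
    return "cpu"

# With 4 GPUs, T5 can live on the first one:
print(pick_t5_device(4))   # cuda:0
# Without GPUs, keep the CPU default from the original snippet:
print(pick_t5_device(0))   # cpu
```

In practice the GPU count would come from torch.cuda.device_count(); it is left as a plain argument here so the sketch stays self-contained.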
Did you do anything additional? I am running into an error that cuBLAS is not supported when t5 is called. I installed CUDA toolkit 11.8 (after 11.3) and it still gives me the same error.
Did the images display correctly?
Can you put up some info about your environment? Python module versions, kernel, etc...
Thanks.
Hi all, we made an integration of BentoML with IF. It supports starting a local server with a simple way to assign stages to any GPU available on the system. You can also build a Docker image and deploy it to a production environment using bentoctl or BentoCloud.
from deepfloyd_if.modules import IFStageI, IFStageII, StableStageIII
from deepfloyd_if.modules.t5 import T5Embedder
I have 4 Maxwell Titan X with 12GB VRAM each.
if_I = IFStageI('IF-I-XL-v1.0', device='cuda:0')
if_II = IFStageII('IF-II-L-v1.0', device='cuda:1')
if_III = StableStageIII('stable-diffusion-x4-upscaler', device='cuda:2')
t5 = T5Embedder(device="cuda:3")
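The manual cuda:0..cuda:3 layout above generalizes to any GPU count. A minimal sketch of round-robin stage-to-device assignment (the helper is hypothetical, not part of deepfloyd_if or BentoML):

```python
def assign_stages(stage_names, gpu_count):
    """Spread pipeline stages across available GPUs round-robin,
    falling back to CPU for every stage when no GPU is present."""
    if gpu_count == 0:
        return {name: "cpu" for name in stage_names}
    return {name: f"cuda:{i % gpu_count}"
            for i, name in enumerate(stage_names)}

stages = ["if_I", "if_II", "if_III", "t5"]
print(assign_stages(stages, 4))
# {'if_I': 'cuda:0', 'if_II': 'cuda:1', 'if_III': 'cuda:2', 't5': 'cuda:3'}
```

With fewer GPUs than stages, e.g. gpu_count=2, stages wrap around (if_III lands back on cuda:0), so watch VRAM when two stages share a 12GB card.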