nighting0le01 opened this issue 3 months ago (status: Open)
The reproduction code is not really in a place where we can take a look at it and provide you with feedback. Please format it properly and let us know.
Also, please provide more information about your system, etc.
Hi @sayakpaul, apologies for the formatting, can you PTAL now? `prompts_batched` is a list of length 32.
The reproduction code should be more minimal. Could we eliminate the class structure and have a simpler reproduction?
FWIW, if inference with just a single image works fine and this only happens when you do batch inference, you're probably hitting the limit of your VRAM.
Also, it's kind of a given that batches will take more time, so you'll probably also need to give us concrete numbers: timings, peak memory usage, and the batch size you're using.
Right now we don't know any of this and we can't really help you; you're providing code with a lot of unknowns.
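As a minimal sketch of how those numbers could be collected (assuming `pipe` and `prompts` stand in for the pipeline and prompt list from the reproduction code; the names are placeholders):

```python
import time
import torch

torch.cuda.reset_peak_memory_stats()
start = time.perf_counter()
images = pipe(prompts).images  # `pipe` and `prompts` are placeholders for your setup
elapsed = time.perf_counter() - start

print(f"elapsed: {elapsed:.1f} s for {len(prompts)} prompts")
print(f"peak VRAM: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GiB")
```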
Hi @asomoza, I checked and it never hits the VRAM limit. I'm using plain SD 1.5; it just gets stuck.
@asomoza @sayakpaul the reason for providing the dataset and dataloader is to show how the data is read from a text file and batched before being fed into an SD 1.5 pipeline.
IMO that can probably be replicated with something much simpler:

```python
prompt = "a dog"
batched_prompts = [prompt] * batch_size  # a flat list of strings, one per image
```
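For reference, a full minimal reproduction along those lines could look like the sketch below. The checkpoint ID, batch size, and step count are assumptions, not taken from the original report:

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed setup: SD 1.5 in fp16 on a single CUDA GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

batch_size = 32
batched_prompts = ["a dog"] * batch_size

# A single call with a list of prompts runs the whole batch through the pipeline.
images = pipe(batched_prompts, num_inference_steps=25).images
```

If this standalone script reproduces the slowdown, the dataset and dataloader code can be dropped from the report entirely.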
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Describe the bug
Batched diffusers pipeline inference is really slow; it appears to get stuck.
Reproduction
Logs
No response
System Info
diffusers: 0.27, torch: 2.0.1
Who can help?
No response