Closed: aiXander closed this 8 months ago.
After many tests without AnimateDiff and in sequential order, I can assure you this isn't an issue with all my nodes. This would be better as a bool option for the batch schedulers only.
The actual fix has been added in PR #64. Thanks for your work; it helped work around the size-mismatch errors for a time.
When the input prompt is too long, it is encoded into two CLIP token tensors of length 77 instead of just one. This causes the
```python
final_conditioning = torch.cat(cond_out, dim=0)
```
line to crash when the other prompts have only one [77, 768] token tensor. This PR simply detects when this happens and drops the second encoded part of the input prompt, which effectively ignores the end of the over-long prompt. This fixes the bug.
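A minimal sketch of the truncation described above (the helper name `concat_conditionings` and the assumed `[1, n*77, 768]` tensor layout are illustrative, not the PR's actual code):

```python
import torch

MAX_TOKENS = 77  # CLIP context length per encoded chunk

def concat_conditionings(cond_out):
    """Hypothetical helper: trim each conditioning to its first
    77-token chunk so all tensors match before concatenation."""
    trimmed = []
    for cond in cond_out:
        # An over-long prompt is assumed to encode to [1, n*77, 768]
        # with n > 1; keeping only the first chunk drops the tail
        # of the prompt, mirroring the behavior described above.
        if cond.shape[1] > MAX_TOKENS:
            cond = cond[:, :MAX_TOKENS, :]
        trimmed.append(cond)
    # Every tensor is now [1, 77, 768], so the cat no longer crashes.
    return torch.cat(trimmed, dim=0)

# Example: one normal prompt plus one over-long prompt (two chunks).
short_cond = torch.randn(1, 77, 768)
long_cond = torch.randn(1, 154, 768)
final_conditioning = concat_conditionings([short_cond, long_cond])  # -> [2, 77, 768]
```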