Open luo3300612 opened 4 months ago
Fixed that. Use --extras 1
to avoid it.
https://github.com/PKU-YuanGroup/Open-Sora-Plan/blob/main/opensora/models/diffusion/dit/dit.py#L239
It seems that the attention_mask argument required by the DiT forward function is not being passed through:
File "/mnt/workspace/Text-to-Video/Open-Sora-Plan/opensora/models/diffusion/diffusion/respace.py", line 130, in __call__
return self.model(x, new_ts, **kwargs)
TypeError: forward() missing 1 required positional argument: 'attention_mask'
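For context, the traceback happens because the sampler's wrapper only forwards `(x, new_ts, **kwargs)`, so any required positional argument of `forward` that is not in `kwargs` is dropped. Below is a minimal, self-contained sketch of the failure and one generic workaround (binding the mask with `functools.partial` before wrapping); the class and function names here are illustrative stand-ins, not the repo's actual API, and the supported fix is the `--extras 1` flag mentioned above.

```python
import functools

class ToyDiT:
    # Stand-in for a model whose forward requires attention_mask
    # as a positional argument (illustrative, not the real DiT).
    def __call__(self, x, t, attention_mask):
        # Apply the mask elementwise; t is unused in this toy.
        return [xi * mi for xi, mi in zip(x, attention_mask)]

class WrappedModel:
    # Mimics the call pattern in respace.py line 130:
    #     return self.model(x, new_ts, **kwargs)
    # If attention_mask is not in kwargs, the call fails with the
    # TypeError shown in the traceback above.
    def __init__(self, model):
        self.model = model

    def __call__(self, x, ts, **kwargs):
        return self.model(x, ts, **kwargs)

def bind_mask(model, attention_mask):
    # Generic workaround sketch: bind attention_mask up front so
    # wrappers that only pass (x, ts, **kwargs) still satisfy it.
    return functools.partial(model, attention_mask=attention_mask)
```

Calling `WrappedModel(ToyDiT())([1.0], 0)` reproduces the `TypeError`, while `WrappedModel(bind_mask(ToyDiT(), mask))` runs cleanly, because `partial` injects `attention_mask` as a keyword on every call.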
Fixed that.
If you have inference results, how is their quality? Can you show some cases?
See https://github.com/PKU-YuanGroup/Open-Sora-Plan/tree/main?tab=readme-ov-file#sampling
Two days ago, I trained a DiT-XL with the following command:
Today, I tried to sample a video with:
However, I ran into the TypeError shown above.
Thank you for taking the time to look into this issue. I look forward to your response.