-
Thank you for making code publicly available.
Here is the fix in the code for multi-prompt video generation:
1. Add the following line in the argparse section:
parser.add_argument("--multipro…
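The flag name above is truncated, so here is only a hedged sketch of what such an argparse change might look like, assuming a hypothetical flag name `--multi_prompt` and a `|` separator (neither is confirmed by the original fix):

```python
import argparse

parser = argparse.ArgumentParser()
# Hypothetical flag name and separator; the actual flag name is truncated above.
parser.add_argument("--multi_prompt", type=str, default=None,
                    help="Multiple prompts separated by '|', one per video segment.")
args = parser.parse_args(["--multi_prompt", "a cat runs|the cat jumps"])

# Split the combined string into one prompt per segment.
prompts = [p.strip() for p in args.multi_prompt.split("|")]
print(prompts)  # -> ['a cat runs', 'the cat jumps']
```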
-
What fantastic work!
When will the Video Generation model be released?
-
Hello,
We are making a synthetic data generator and would like to contribute data to the project:
https://simian.mov/
https://raccoonresearch.github.io/Simian/
For example:
*Wide shot of Tyra…
-
Hi, can this model generate full-body videos? For example, given prompts like "A man is walking/running/waving his hands" or "A woman is turning around", can it generate the corresponding videos?
-
I would like to do video-to-video (v2v) generation with your model.
I think we need to add two things in opensora/sample/pipeline_videogen.py.
1. Create an encode_videos function like the following:
```python
def encode_video…
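The function body is truncated above, so here is only a minimal sketch of what such a helper might look like. It assumes a `vae` object exposing an `encode` method and a `scaling_factor` attribute, mirroring the diffusers-style image-encoding path; these names are assumptions, not necessarily the pipeline's actual API, and a stub VAE stands in so the sketch is self-contained:

```python
# Hypothetical helper; `vae` is assumed to expose .encode(frames) -> latents
# and a .scaling_factor, as in diffusers-style VAEs. A stub stands in here.

class StubVAE:
    scaling_factor = 0.18215  # value used by Stable-Diffusion-style VAEs

    def encode(self, frames):
        # A real VAE would return a latent distribution; the stub halves values.
        return [[x * 0.5 for x in frame] for frame in frames]

def encode_videos(vae, frames):
    """Encode a list of video frames into scaled latents, frame by frame."""
    latents = vae.encode(frames)
    return [[x * vae.scaling_factor for x in frame] for frame in latents]

vae = StubVAE()
latents = encode_videos(vae, [[1.0, 2.0], [3.0, 4.0]])
```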
-
Greetings,
I am Mohammad Tabish Shamim, an MSc Artificial Intelligence student at the University of Southampton.
For my MSc dissertation, I am researching zero-shot text-to-video and I have b…
-
How can I get a longer video? I can only generate a 2-second video using the script.
-
Hi @LinB203 , just want to bring [VSTAR: Generative Temporal Nursing for Longer Dynamic Video Synthesis](https://arxiv.org/pdf/2403.13501.pdf) to your attention, where the temporal attention mechanism…
-
When I use the Load Video node to generate images and then use the ic-light node for light generation, I find that only one image is generated in the end. What do I need to tweak in the parameters so tha…
-
Hi,
Thank you for the great work! Also, your analysis of the performance differences between DynamiCrafter and SVD backbones for Latte is very insightful.
I'd be interested to learn more about h…