lsflyt-pku opened 12 hours ago
Thank you for your excellent work. I tried the bash and Gradio scripts in this repository, but the model generates a pure black video at 1024 resolution, as mentioned in issue #112. The Gradio app.py in the HuggingFace Space, however, generates videos successfully; yet when I run it locally with the checkpoint downloaded from HuggingFace, it still produces black videos. Is the checkpoint used by the HuggingFace Space inconsistent with the public one (https://huggingface.co/Doubiiu/DynamiCrafter_1024/blob/main/model.ckpt)?

I then compared configs/inference_1024_v1.0.yaml with inference_512_v1.0.yaml from the HuggingFace Space and found that perframe_ae is set to True in the 1024 config but False in the 512 config. Since I have enough GPU memory, I set perframe_ae=False in the 1024 yaml, and it now generates videos successfully.
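For anyone hitting the same problem, here is a minimal sketch of overriding the flag programmatically with OmegaConf instead of editing the yaml by hand. The key path model.params.perframe_ae is an assumption on my side; check your own configs/inference_1024_v1.0.yaml for the exact location of the flag.

```python
# Minimal sketch: override perframe_ae when loading the 1024 config locally.
# The key path "model.params.perframe_ae" is an assumption; verify it against
# your copy of configs/inference_1024_v1.0.yaml before relying on this.
from omegaconf import OmegaConf

config = OmegaConf.load("configs/inference_1024_v1.0.yaml")

# perframe_ae=True decodes frames one at a time to save VRAM; with enough
# GPU memory it can be disabled, which worked around the black-video output.
OmegaConf.update(config, "model.params.perframe_ae", False)

print(OmegaConf.select(config, "model.params.perframe_ae"))  # -> False
```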