WuZhongQing opened this issue 1 month ago
It is possible. Actually, we are limited by GPU memory (80G A800), so we only train up to 60 frames.
Thanks for your reply. Have you ever thought about reducing the GPU memory requirement?
And another question: did you ever think about using SVD to generate video? Is the quality of SVD-generated video not good enough?
We've done a lot to save GPU memory. You may check the details of our implementation.
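For reference, common ways to cut training memory in PyTorch include gradient checkpointing and mixed-precision training. Below is a minimal generic sketch of those two techniques, not this repo's actual implementation; the model and optimizer are placeholders:

```python
import torch
from torch.utils.checkpoint import checkpoint

# Generic memory-saving sketch (gradient checkpointing + mixed precision).
# This is NOT this repo's implementation; model/optimizer are placeholders.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 512), torch.nn.GELU(), torch.nn.Linear(512, 512)
).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()  # fp16 shrinks most activation memory

x = torch.randn(8, 512, device="cuda")

with torch.cuda.amp.autocast():
    # Checkpointing drops intermediate activations and recomputes them in
    # the backward pass, trading extra compute for lower peak memory.
    y = checkpoint(model, x, use_reentrant=False)
    loss = y.pow(2).mean()

scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```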
Regarding SVD: currently, you can refer to Vista, which is based on SVD but lacks fine-grained controllability. We will discuss this problem in our new work. The new paper will come out soon. Stay tuned.
Thanks a lot.
And can I ask what the difference is between video generation and image generation? Is it just increasing the batch size?
They are fundamentally different. Images are 2D, but videos are 3D (with a temporal dimension). From a resource perspective, one simple example: many high-resolution image generation models only support training with batch size = 1. The training/inference cost of video can easily explode, and the model needs to gain more capability.
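To make that concrete, compare the latent shapes; the numbers below are illustrative, not from the paper:

```python
import torch

# Illustrative only: the extra temporal dimension multiplies memory.
B, C, H, W = 1, 4, 64, 64      # a single image latent: [B, C, H, W]
T = 60                         # frames per clip (e.g., a 60-frame video)

image_latent = torch.zeros(B, C, H, W)
video_latent = torch.zeros(B, C, T, H, W)   # video adds the T dimension

# Activations scale roughly linearly with T, before even counting the
# extra temporal attention/convolution layers a video model needs.
print(image_latent.numel(), video_latent.numel())  # 16384 vs 983040
```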
Thanks for your answer. I found that you didn't use any image to generate the latents (you just use BEV) to generate video. My question is: how about using 8 images + 8 random latents to generate a 16-frame video? Could this help increase temporal continuity for generating long videos?
I think you want to ask about future frame prediction. This can be thought of as a downstream task of the video generation model. There are some inference tricks for it, similar to image inpainting. Either way, it relies on the capability of the video generation model.
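For anyone curious, one common such trick is the "replacement" method: at every denoising step, overwrite the latents of the known frames with suitably noised versions of them, analogous to diffusion inpainting. A generic sketch, assuming hypothetical diffusers-style `unet` and `scheduler` stand-ins (not this repo's code):

```python
import torch

@torch.no_grad()
def predict_future_frames(unet, scheduler, known_latents, num_frames):
    """Replacement-style future frame prediction (generic sketch).

    known_latents: [B, C, T_known, H, W] clean latents of observed frames.
    `unet` and `scheduler` are hypothetical diffusers-style stand-ins.
    """
    B, C, T_known, H, W = known_latents.shape
    latents = torch.randn(B, C, num_frames, H, W,
                          device=known_latents.device)
    for t in scheduler.timesteps:
        # Overwrite the observed frames with versions noised to level t,
        # so the generated frames are denoised consistently with them.
        noise = torch.randn_like(known_latents)
        latents[:, :, :T_known] = scheduler.add_noise(known_latents, noise, t)
        # Denoise the whole clip one step.
        noise_pred = unet(latents, t)
        latents = scheduler.step(noise_pred, t, latents).prev_sample
    return latents
```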
Thank you very much.
Could you please offer me some help with those inference tricks, or share some links? I want to try ~
Sorry, I cannot provide that because I haven't tried any personally. I think a quick search could give you the answer.
That's OK ~ thanks for your reply, and I look forward to your new research ~
Hi, thanks for your open source again. I just found that there is no difference between the 16-frame yaml and the 61-frame yaml except sc_attn_index, so I'm wondering if I can train with 240 frames by just changing sc_attn_index? Looking forward to your reply ~