cure-lab / MagicDrive

[ICLR24] Official implementation of the paper “MagicDrive: Street View Generation with Diverse 3D Geometry Control”
https://gaoruiyuan.com/magicdrive/
GNU Affero General Public License v3.0
667 stars 40 forks

about training 240 frames~ #97

Open WuZhongQing opened 1 month ago

WuZhongQing commented 1 month ago

Hi, thanks again for open-sourcing this work. I noticed that the only difference between the 16-frame yaml and the 61-frame yaml is `sc_attn_index`, so I'm wondering: can I train with 240 frames just by changing the model's `sc_attn_index`? Looking forward to your reply ~

flymin commented 1 month ago

It is possible. In practice, we are limited by GPU memory (80GB A800), so we only trained up to 60 frames.
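As a rough sketch of what extending the frame count might look like: the function below builds a sparse cross-frame attention index where every frame attends to the first (key) frame and its immediate predecessor. Note this is a hypothetical scheme for illustration only; the actual layout of `sc_attn_index` in MagicDrive's yaml files may differ, so check the 16-frame and 61-frame configs before generating a 240-frame version.

```python
# Hypothetical sketch: build a cross-frame attention index list for N frames,
# where each frame attends to the first (key) frame and its previous frame.
# This mirrors a common sparse temporal-attention pattern; the real
# sc_attn_index layout may differ -- compare against the released yamls.
def build_sc_attn_index(num_frames):
    index = []
    for t in range(num_frames):
        neighbors = sorted({0, max(t - 1, 0)})  # first frame + previous frame
        index.append(neighbors)
    return index

print(build_sc_attn_index(4))  # [[0], [0], [0, 1], [0, 2]]
```

The same generator would then produce the 240-entry list with `build_sc_attn_index(240)`, keeping the per-frame attention cost constant as the clip grows.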

WuZhongQing commented 1 month ago

> It is possible. In practice, we are limited by GPU memory (80GB A800), so we only trained up to 60 frames.

Thanks for your reply. Have you ever thought about reducing the GPU memory requirement?

WuZhongQing commented 1 month ago

Another question: did you ever consider using SVD (Stable Video Diffusion) to generate video? Is the quality of SVD-generated video not good enough?

flymin commented 1 month ago

> Thanks for your reply. Have you ever thought about reducing the GPU memory requirement?

We've done a lot to save GPU memory. You may check the details of our implementation.

> Another question: did you ever consider using SVD (Stable Video Diffusion) to generate video? Is the quality of SVD-generated video not good enough?

Currently, you can refer to Vista, which is based on SVD but lacks fine-grained controllability. In our new work, we discuss this problem; the new paper will come out soon. Stay tuned.

WuZhongQing commented 1 month ago

> Thanks for your reply. Have you ever thought about reducing the GPU memory requirement?
>
> We've done a lot to save GPU memory. You may check the details of our implementation.
>
> Another question: did you ever consider using SVD (Stable Video Diffusion) to generate video? Is the quality of SVD-generated video not good enough?
>
> Currently, you can refer to Vista, which is based on SVD but lacks fine-grained controllability. In our new work, we discuss this problem; the new paper will come out soon. Stay tuned.

thanks a lot.

WuZhongQing commented 1 month ago

> It is possible. In practice, we are limited by GPU memory (80GB A800), so we only trained up to 60 frames.

Can I ask what the difference is between video generation and image generation? Is it just increasing the batch size?

flymin commented 1 month ago

> Can I ask what the difference is between video generation and image generation? Is it just increasing the batch size?

They are fundamentally different. Images are 2D, but videos are 3D (with a temporal dimension). From a resource perspective, one simple example: many high-res image generation models only support training with batch size = 1. The training/inference cost of video can easily explode, and the model needs more capacity.
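A quick back-of-envelope calculation makes the blow-up concrete. The shapes below are illustrative assumptions (a 6-camera street-view latent at 224x400 with 8x VAE downsampling, 4 latent channels, fp16), not MagicDrive's exact numbers:

```python
# Back-of-envelope: latent tensor size for image vs. video generation.
# Assumed shapes: 6 cameras, 224x400 input downsampled 8x, 4 latent
# channels, fp16 (2 bytes/element). Illustrative numbers only.
def latent_bytes(frames, cams=6, c=4, h=224 // 8, w=400 // 8, bytes_per=2):
    return frames * cams * c * h * w * bytes_per

image = latent_bytes(1)   # single multi-camera frame
video = latent_bytes(60)  # 60-frame clip
print(image, video, video // image)  # 67200 4032000 60
```

The latents alone scale linearly with the frame count, and activations inside the UNet (plus the extra temporal attention) grow on top of that, which is why an 80GB card caps out around 60 frames.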

WuZhongQing commented 1 month ago

> Can I ask what the difference is between video generation and image generation? Is it just increasing the batch size?
>
> They are fundamentally different. Images are 2D, but videos are 3D (with a temporal dimension). From a resource perspective, one simple example: many high-res image generation models only support training with batch size = 1. The training/inference cost of video can easily explode, and the model needs more capacity.

Thanks for your answer. I noticed that you don't use any images to generate latents (you only use BEV) when generating video. My question is: what about using 8 images + 8 random latents to generate a 16-frame video? Could this help improve temporal continuity for generating long videos?

flymin commented 1 month ago

thanks for your answer, and i found that you didn't use any image to generate latents (you just use bev) to generate video, and my questions is that how about use 8 images + 8 random latents to generate 16 frames video, can this help to increase the continuity in time to generate long video ?

I think you are asking about future-frame prediction. This can be thought of as a downstream task of the video generation model. There are some inference tricks to do so, similar to image inpainting. Either way, it relies on the capability of the video generation model.
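One such inpainting-style trick, sketched under heavy assumptions: run the normal denoising loop over all frames, but after every step overwrite the latents of the known frames with a re-noised copy of the ground truth (a RePaint-style replacement scheme). `denoise_step` and `noise_at` below are dummy stand-ins for the real diffusion model and noise schedule; only the masking logic is the point:

```python
import numpy as np

# Hedged sketch of replacement-based conditioning for future-frame
# prediction: known frames are re-injected after each denoising step,
# while the remaining frames are denoised freely from pure noise.
def predict_future(known, num_total, steps, denoise_step, noise_at):
    x = np.random.randn(num_total, *known.shape[1:])  # all frames start as noise
    mask = np.zeros(num_total, dtype=bool)
    mask[: len(known)] = True                          # frames to keep fixed
    for s in reversed(range(steps)):
        x = denoise_step(x, s)                         # model denoises every frame
        x[mask] = noise_at(known, s)                   # re-inject known frames
    return x

# Toy usage: 8 known frames conditioning a 16-frame clip, with dummy
# model/schedule (no noise is added at the final step s = 0).
known = np.ones((8, 4))
out = predict_future(
    known, num_total=16, steps=5,
    denoise_step=lambda x, s: x * 0.9,             # dummy denoiser
    noise_at=lambda k, s: k if s == 0 else k + s,  # dummy noising schedule
)
```

Whether this yields coherent long videos still depends on the base model's temporal capability, as noted above.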

WuZhongQing commented 1 month ago

> Thanks for your answer. I noticed that you don't use any images to generate latents (you only use BEV) when generating video. My question is: what about using 8 images + 8 random latents to generate a 16-frame video? Could this help improve temporal continuity for generating long videos?
>
> I think you are asking about future-frame prediction. This can be thought of as a downstream task of the video generation model. There are some inference tricks to do so, similar to image inpainting. Either way, it relies on the capability of the video generation model.

thank you very much

WuZhongQing commented 1 month ago

> Thanks for your answer. I noticed that you don't use any images to generate latents (you only use BEV) when generating video. My question is: what about using 8 images + 8 random latents to generate a 16-frame video? Could this help improve temporal continuity for generating long videos?
>
> I think you are asking about future-frame prediction. This can be thought of as a downstream task of the video generation model. There are some inference tricks to do so, similar to image inpainting. Either way, it relies on the capability of the video generation model.
>
> thank you very much

Could you please offer some help with those inference tricks, or share some links? I want to try ~

github-actions[bot] commented 3 weeks ago

This issue is stale because it has been open for 7 days with no activity. If you do not have any follow-ups, the issue will be closed soon.

flymin commented 1 week ago

Sorry, I cannot provide that because I have not tried any personally. I think a quick search would give you the answer.

WuZhongQing commented 1 week ago

> Sorry, I cannot provide that because I have not tried any personally. I think a quick search would give you the answer.

That's ok ~ Thanks for your reply, and I look forward to your new research~

github-actions[bot] commented 7 hours ago

This issue is stale because it has been open for 7 days with no activity. If you do not have any follow-ups, the issue will be closed soon.