nihaomiao / CVPR23_LFDM

The PyTorch implementation of our CVPR 2023 paper "Conditional Image-to-Video Generation with Latent Flow Diffusion Models"
BSD 2-Clause "Simplified" License

The model file could not be found #21

Closed · zhtjtcz closed this issue 11 months ago

zhtjtcz commented 11 months ago

When I try to run `python -u demo/demo_mug.py`, I encounter the following error:

no checkpoint found at '/data/hfn5052/text2motion/videoflowdiff_mug/snapshots-j-sl-random-of-tr-rmm/flowdiff_0005_S111600.pth'

I notice that there are two model paths in the code, RESTORE_FROM and AE_RESTORE_FROM. I set AE_RESTORE_FROM to the path of the pre-trained model for the MUG dataset, but what should RESTORE_FROM be set to?

I also want to ask when an updated version of the code will be released. In the current version many settings are hard-coded, which makes it inconvenient to modify and run. If they could be collected into a YAML configuration file, I think it would be a great benefit to the rest of the community following your work.

Finally, thank you for this work; it gave us a lot of inspiring ideas, and I hope the project gets even more attention.

nihaomiao commented 11 months ago

Hi, @zhtjtcz, thanks a lot for your interest in our work and for the suggestion! Sorry for the unclear variable names. RESTORE_FROM should be the path to the pre-trained DM (diffusion model) checkpoint, and AE_RESTORE_FROM should be the path to the pre-trained LFAE (latent flow autoencoder) checkpoint. You can download both from the table in our README and set the variables to the paths of the downloaded files.
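As a minimal sketch (the file names and local paths below are placeholders, not the actual checkpoint names from the README), the two variables near the top of demo/demo_mug.py would be set along these lines:

```python
# Hypothetical example: point both variables at your local copies of the
# checkpoints downloaded from the README table. File names are placeholders.
AE_RESTORE_FROM = "/path/to/checkpoints/LFAE_MUG.pth"  # pre-trained LFAE (autoencoder)
RESTORE_FROM = "/path/to/checkpoints/DM_MUG.pth"       # pre-trained DM (diffusion model)
```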

A YAML configuration is more common, but I personally prefer to keep all the variables at the top of the main program to ease debugging. Thanks again for the suggestion, though; I may try it in the future. Please feel free to let me know if you have any other questions or comments!
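If you want to avoid editing the script directly, one option is to load the hard-coded settings from a small YAML file yourself. This is only an illustrative sketch using PyYAML; the file name and keys below are assumptions, not part of the repository:

```python
# Assumed config file "config_mug.yaml" with contents like:
#   ae_restore_from: /path/to/checkpoints/LFAE_MUG.pth
#   restore_from: /path/to/checkpoints/DM_MUG.pth
import yaml  # PyYAML

with open("config_mug.yaml") as f:
    cfg = yaml.safe_load(f)

# Override the hard-coded paths in the demo script with the configured ones.
AE_RESTORE_FROM = cfg["ae_restore_from"]
RESTORE_FROM = cfg["restore_from"]
```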