vincent-leguen / PhyDNet

Code for our CVPR 2020 paper "Disentangling Physical Dynamics from Unknown Factors for Unsupervised Video Prediction"
MIT License

MSE obtained far higher than reported results #21

Open eugenelet opened 3 years ago

eugenelet commented 3 years ago

Hi Authors,

I ran the code using the default configuration on Moving MNIST by directly executing python3 main.py.

The final MSE obtained after 1000 epochs is around 75.26, which is far higher than the 24.4 reported in the paper. Is there anything I'm missing here? Thanks!

Eugene

vincent-leguen commented 3 years ago

Hi @eugenelet , to improve reproducibility, I have uploaded an improved version of PhyDNet with separate encoders and decoders (more details in the paper at the CVPR OmniCV workshop 2020: https://openaccess.thecvf.com/content_CVPRW_2020/papers/w38/Le_Guen_A_Deep_Physical_Model_for_Solar_Irradiance_Forecasting_With_Fisheye_CVPRW_2020_paper.pdf). I have also uploaded the pretrained model, which attains MSE = 24.19 (better than in the CVPR paper). In particular, we found that the batch size has a crucial impact on performance; we fixed it at 16 for this model. Best, Vincent
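If you retrain from scratch, make sure the batch size actually resolves to 16. A hypothetical invocation (the flag name `--batch_size` is my guess; check the argparse setup in main.py for the real option name and default):

```shell
# Hypothetical flag; verify against main.py's argument parser.
python3 main.py --batch_size 16
```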

eugenelet commented 3 years ago

Hi @vincent-leguen , thanks for releasing the pretrained model and the updated configuration of the code. I'll re-run the code from scratch to validate the reported performance. This is an interesting field to contribute to.

Eugene

eugenelet commented 3 years ago

Hi @vincent-leguen , for the recently updated code, should I run with the default config, i.e. python3 main.py? At epoch 1000, I obtained an MSE of 38.62, which is still off from the reported results.

toddwyl commented 3 years ago

> Hi @vincent-leguen , for the recently updated code, should I run with the default config, i.e. python3 main.py? At epoch 1000, I obtained an MSE of 38.62, which is still off from the reported results.

I found their model is sensitive to batch size. You should make sure your batch size is 16; I think the GroupNorm may cause it. When I set the batch size to 16, it worked well.
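For context, GroupNorm computes its statistics per sample (within channel groups), so the layer's output itself does not depend on batch size; the sensitivity observed here is more likely an optimization effect (e.g. gradient noise at small batches). A minimal NumPy sketch of group normalization (my own illustration, not PhyDNet code) showing the per-sample invariance:

```python
import numpy as np

def group_norm(x, num_groups, eps=1e-5):
    # x: (N, C, H, W); mean/var are computed per sample, per channel group
    n, c, h, w = x.shape
    g = x.reshape(n, num_groups, c // num_groups, h, w)
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    return ((g - mean) / np.sqrt(var + eps)).reshape(n, c, h, w)

rng = np.random.default_rng(0)
batch16 = rng.standard_normal((16, 8, 4, 4))

full = group_norm(batch16, num_groups=2)        # normalize the whole batch
single = group_norm(batch16[:1], num_groups=2)  # normalize one sample alone

# Per-sample statistics => sample 0 is normalized identically either way
print(np.allclose(full[0], single[0]))  # True
```

With BatchNorm, by contrast, the mean and variance are pooled across the batch, so shrinking the batch changes every sample's output and the running statistics used at test time.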