gaozhangyang opened this issue 3 years ago
Hey, I tried this today after reading your post, and indeed it works almost identically: sometimes better, sometimes worse. Did you disable moment regularization, or use another clever trick?
I tried the model without the PhyDcell (a single LSTMcell). It gave results similar to PhyDcell+LSTMcell, so I wonder whether I set something up wrong?
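For concreteness, here is a minimal sketch of the two configurations being compared, written against the `PhyCell`/`ConvLSTM`/`EncoderRNN` module names from the PhyDNet reference code. The import path and all shape/dimension arguments below are my assumptions for illustration, not the exact original settings:

```python
# Sketch of the two ablation variants, assuming the PhyDNet reference
# modules. Import path and constructor arguments are illustrative only.
import torch
from models.models import ConvLSTM, PhyCell, EncoderRNN  # adjust to your checkout

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

def make_convlstm():
    # ConvLSTM branch; the dims here are assumptions, not the repo defaults
    return ConvLSTM(input_shape=(16, 16), input_dim=64,
                    hidden_dims=[128, 128, 64], n_layers=3,
                    kernel_size=(3, 3), device=device)

# PHY + LSTM: physical branch (PhyCell) plus a ConvLSTM residual branch
phycell = PhyCell(input_shape=(16, 16), input_dim=64, F_hidden_dims=[49],
                  n_layers=1, kernel_size=(7, 7), device=device)
encoder_phy_lstm = EncoderRNN(phycell, make_convlstm(), device)

# LSTM + LSTM ablation: swap the PhyCell for a second ConvLSTM, assuming
# both cells expose the same forward interface (as in the reference code)
encoder_lstm_lstm = EncoderRNN(make_convlstm(), make_convlstm(), device)
```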
To make a fair comparison between LSTM + LSTM and PHY + LSTM, I disabled the moment regularization trick and left everything else the same as the original code.
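Concretely, the part being gated off is the moment (K2M) regularization term in the training step. The snippet below is a sketch from memory of the reference loop; `K2M`, `encoder`, `criterion`, and `constraints` are names from the PhyDNet reference code, while the `use_moment_reg` flag is a hypothetical addition:

```python
# Inside train_on_batch, after the usual prediction/reconstruction loss.
# use_moment_reg is a hypothetical flag for the ablation; everything else
# follows the PhyDNet reference training loop as I recall it.
use_moment_reg = False  # False for the LSTM+LSTM vs PHY+LSTM comparison

if use_moment_reg:
    k2m = K2M([7, 7]).to(device)
    for b in range(encoder.phycell.cell_list[0].input_dim):
        # Map each 7x7 physical filter to its moment matrix and push it
        # toward the target partial-derivative constraints.
        filters = encoder.phycell.cell_list[0].F.conv1.weight[:, b, :, :]
        m = k2m(filters.double()).float()
        loss += criterion(m, constraints)
```

In the lstm+lstm variant this block has to be skipped regardless, since there is no PhyCell whose filters could be constrained.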
Thank you for your work! After reading your paper, I ran ablation studies with different temporal block combinations: lstm+lstm, phy+lstm, and phy+phy. With the same number of training epochs, performance on Moving MNIST ranks, from best to worst, lstm+lstm > phy+lstm > phy+phy:

| blocks | epochs | training time | eval mse | eval mae | eval ssim | eval bce |
| --- | --- | --- | --- | --- | --- | --- |
| lstm+lstm | 500 | n/a | 32.77762456483479 | 89.78828951678699 | 0.9261694075639623 | 381.8145131557803 |
| phy+lstm | 500 | n/a | 37.04977672914915 | 100.31283704540398 | 0.9129215236492586 | 410.4891109804565 |
| phy+phy | 500 | n/a | 40.79421605943124 | 107.12226124051251 | 0.9043235692587113 | 433.5146431838409 |
| lstm+lstm | 1000 | 10d 9h | 31.734661367875113 | 87.48544041114518 | 0.9288795915640016 | 375.4039376560644 |
| phy+lstm | 1000 | 8d 8h | 36.324036610277396 | 98.47413142723373 | 0.9151237767597262 | 405.9187741050238 |
| phy+phy | 1000 | 8d 12h | 40.345165107823625 | 106.5546834438662 | 0.9050380287013375 | 430.59628875346124 |

All runs used batch_size=64 and lr=0.0001. I mean no harm; I am merely reporting the results of the experiment. Thanks.
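For anyone trying to reproduce these numbers, here is a sketch of how the four reported metrics can be computed. The reduction (per-pixel mean over batch and sequence, then summed over the image) is my reading of the reference evaluation, so treat the exact axes as an assumption:

```python
# Sketch: the four reported metrics for predictions/target arrays of shape
# (batch, seq_len, H, W) with values in [0, 1]. The mean-over-(batch, seq),
# sum-over-pixels reduction is an assumption about the reference evaluation.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def eval_metrics(predictions, target, eps=1e-7):
    mse = np.mean((predictions - target) ** 2, axis=(0, 1)).sum()
    mae = np.mean(np.abs(predictions - target), axis=(0, 1)).sum()
    p = np.clip(predictions, eps, 1 - eps)  # clip to keep the logs finite
    bce = np.mean(-target * np.log(p) - (1 - target) * np.log(1 - p),
                  axis=(0, 1)).sum()
    # SSIM averaged over every (sample, frame) pair
    scores = [ssim(target[i, t], predictions[i, t], data_range=1.0)
              for i in range(target.shape[0])
              for t in range(target.shape[1])]
    return mse, mae, float(np.mean(scores)), bce
```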