google-deepmind / graphcast


about the autoregressive finetuning #68

Open zhonglidadi opened 2 months ago

zhonglidadi commented 2 months ago

I trained a one-step model that performs very well, but when I extended the autoregressive fine-tuning from 2 steps up to 12 steps, the losses for steps 2 through 12 barely converged. For example, when training with 2 autoregressive steps, the loss for the first step stayed around 0.008 and the loss for the second step stayed around 0.014. When I extended the rollout to 10 steps, the first-step loss was still around 0.008 and the second-step loss still around 0.014, with no improvement. As a result, medium- and long-range predictions are inaccurate. What could the underlying issue be?
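
For context, here is a minimal sketch of what multi-step autoregressive fine-tuning typically looks like in JAX. This is not GraphCast's actual training code (which is not released); `one_step_model`, `params`, and the tensor shapes are hypothetical, and the loss is plain MSE for illustration:

```python
import jax
import jax.numpy as jnp

def rollout_loss(params, one_step_model, x0, targets):
    """Unroll a one-step model autoregressively and return per-step MSE.

    params:         model parameters (pytree); hypothetical.
    one_step_model: fn(params, state) -> predicted next state; hypothetical.
    x0:             initial state, shape [batch, ...].
    targets:        ground truth per rollout step, shape [steps, batch, ...].
    """
    def step(state, target):
        pred = one_step_model(params, state)
        loss = jnp.mean((pred - target) ** 2)
        # Feed the prediction (not the ground truth) back in, so gradients
        # flow through the entire rollout rather than one step at a time.
        return pred, loss

    _, per_step_losses = jax.lax.scan(step, x0, targets)
    return jnp.mean(per_step_losses), per_step_losses

# Gradients of the averaged rollout loss w.r.t. the parameters:
# grads = jax.grad(lambda p: rollout_loss(p, one_step_model, x0, targets)[0])(params)
```

The key design choice is that each step consumes the previous step's prediction, so later-step losses can only fall if the model learns to limit error compounding; if gradients are accidentally cut at the feedback point (e.g. a `stop_gradient` on the fed-back state), losses beyond step 1 would stay flat as described above.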