KimMeen / Time-LLM

[ICLR 2024] Official implementation of " 🦙 Time-LLM: Time Series Forecasting by Reprogramming Large Language Models"
https://arxiv.org/abs/2310.01728
Apache License 2.0

Has anyone observed the validation loss diverging significantly from the training and test losses during training? #119

Closed zsczszsc closed 5 days ago

zsczszsc commented 1 week ago

As shown in the figure, this is one training run; the other runs look similar. The loss function is MSE. Train Loss: 0.3769871 Vali Loss: 0.7908717 Test Loss: 0.3968912 MAE Loss: 0.4176204. Thank you very much for your work!

kwuking commented 5 days ago

Thank you for your interest in our work. Currently, the validation loss is only used by the early-stopping strategy; it does not participate in the actual training process. You could try adjusting the model's hyperparameters and running the training again.
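To illustrate the point about the validation loss, here is a minimal sketch of validation-driven early stopping. This is a hypothetical simplification for clarity, not Time-LLM's actual `EarlyStopping` implementation: the validation loss only decides when to halt training, while gradients are computed from the training loss alone, so a gap between the two is possible.

```python
class EarlyStopping:
    """Minimal early-stopping sketch (hypothetical; the real class may differ).

    Validation loss is monitored only to decide when to stop;
    it never contributes to the gradient update.
    """

    def __init__(self, patience=3):
        self.patience = patience      # epochs to wait without improvement
        self.counter = 0              # epochs since last improvement
        self.best_loss = float("inf")
        self.should_stop = False

    def step(self, vali_loss):
        if vali_loss < self.best_loss:
            # Validation loss improved: reset the patience counter.
            self.best_loss = vali_loss
            self.counter = 0
        else:
            # No improvement: count toward the patience limit.
            self.counter += 1
            if self.counter >= self.patience:
                self.should_stop = True
```

In a training loop, `step()` would be called once per epoch with the epoch's validation loss, and training halts when `should_stop` becomes true.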