heng94 opened this issue 1 year ago
Did you also try changing max_encoder_length? I mean reducing max_encoder_length; sometimes that helps with overfitting. Just FYI, I get plots similar to the ones you have shown: the validation loss increases drastically after a few epochs.
@sairamtvv Thanks for your reply! Setting max_encoder_length to 7*24 is what the papers I read do; it appears to be a typical setting for electricity consumption prediction. I am not sure whether it actually helps. I will try reducing it and then post the results here. Thank you!
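For reference, this is roughly how the encoder length enters my setup (a minimal sketch; `train_df` and the column names stand in for my actual data):

```python
# Minimal sketch of where max_encoder_length is set in pytorch-forecasting.
# train_df and the column names are placeholders for my actual data.
from pytorch_forecasting import TimeSeriesDataSet

max_encoder_length = 7 * 24  # one week of hourly history, as in the papers
max_prediction_length = 24   # forecast the next day

training = TimeSeriesDataSet(
    train_df,                     # pandas DataFrame of hourly consumption
    time_idx="time_idx",          # integer time index column
    target="consumption",         # electricity consumption target
    group_ids=["group_id"],       # one series per meter/household
    max_encoder_length=max_encoder_length,
    max_prediction_length=max_prediction_length,
    time_varying_unknown_reals=["consumption"],
)
```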
Setting max_encoder_length to 24 does not work either. Here are the plots. I still need to find other ways.
In the plot you showed above, at least the validation loss has gone down from 0.85 to 0.55. Please correct me in case I am wrong.
Yes, it rises again, and the result is still not good.
Is there any way to set the sliding-window stride to more than 1, so that consecutive training windows start further apart? For example, windows covering time steps 1-192, 25-217, 49-241 (a stride of 24); see the sketch below.
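To make the idea concrete, here is the kind of subsampling I mean (a hypothetical workaround; as far as I can tell, TimeSeriesDataSet has no built-in stride option, so this thins out the windows it generates with a plain torch Subset, assuming the dataset index is ordered by time within each series; `training` is the TimeSeriesDataSet from above):

```python
# Hypothetical workaround: keep only every 24th window, so consecutive
# training samples start 24 hours apart (1-192, 25-217, 49-241, ...).
from torch.utils.data import DataLoader, Subset

stride = 24
strided = Subset(training, list(range(0, len(training), stride)))

train_dataloader = DataLoader(
    strided,
    batch_size=64,
    shuffle=True,
    collate_fn=training._collate_fn,  # reuse the dataset's (private) collate fn
)
```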
Thank you!
@heng94 did you figure out what the problem could be?
Problem statement
I am using the TFT model to predict electricity consumption, but during training the training loss decreases while the validation loss goes up, which means the model is overfitting.
I tried reducing the learning rate, batch size, hidden size, and hidden_continuous_size, among other things, to address the overfitting, but none of it seemed to help; a simplified sketch of these settings is below.
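For context, this is a simplified sketch of the knobs I have been varying (the values are illustrative, not my actual configuration; my full script is at the end of this post):

```python
# Simplified sketch of the regularization knobs I have been varying.
# `training`, `train_dataloader`, and `val_dataloader` are the
# TimeSeriesDataSet and dataloaders built from my data.
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping
from pytorch_forecasting import TemporalFusionTransformer

tft = TemporalFusionTransformer.from_dataset(
    training,
    learning_rate=1e-3,        # tried lowering this
    hidden_size=16,            # tried smaller hidden sizes
    hidden_continuous_size=8,  # tried reducing this as well
    attention_head_size=1,
    dropout=0.3,               # extra dropout against overfitting
)

trainer = pl.Trainer(
    max_epochs=50,
    gradient_clip_val=0.1,
    # stop as soon as the validation loss starts climbing
    callbacks=[EarlyStopping(monitor="val_loss", patience=5, mode="min")],
)
trainer.fit(tft, train_dataloaders=train_dataloader, val_dataloaders=val_dataloader)
```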
Can anyone give me some suggestions?
Thank you very much!
Here is the code: