Hi, could you detail the legend? I will take the orange line as the prediction and the blue line as the ground truth.
Over-smoothing is a shared problem of deep models. Maybe you can try some sharpness-related loss functions or MSE in the frequency domain, in addition to the MSE used in Autoformer.
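For example, a minimal sketch of what an MSE in the frequency domain might look like (the function name, tensor shapes, and the 0.1 weight are illustrative assumptions, not code from this repository):

```python
import torch
import torch.nn.functional as F

def frequency_mse(pred, true):
    # pred, true: [batch, pred_len, channels]
    pred_f = torch.fft.rfft(pred, dim=1)  # spectrum along the time axis
    true_f = torch.fft.rfft(true, dim=1)
    # mean squared error between the complex spectra
    return torch.mean(torch.abs(pred_f - true_f) ** 2)

# combined objective: the usual time-domain MSE plus a frequency term
# loss = F.mse_loss(pred, true) + 0.1 * frequency_mse(pred, true)
```

Penalizing the spectrum directly pushes the model to keep high-frequency components instead of smoothing them away.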
Sorry for that; actually, the blue is the prediction and the orange is the ground truth. I averaged the results for every window (there are windows of 96 samples each). After averaging, the results look better. I hope that's a valid approach.
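Roughly, the averaging I did looks like this (a minimal sketch, assuming overlapping windows of 96 steps; the names are just illustrative):

```python
import numpy as np

def average_windows(preds, starts, series_len):
    # preds:  [num_windows, 96], one forecast per window
    # starts: start index of each window in the full series
    total = np.zeros(series_len)
    count = np.zeros(series_len)
    for p, s in zip(preds, starts):
        total[s:s + len(p)] += p
        count[s:s + len(p)] += 1
    return total / np.maximum(count, 1)  # average where windows overlap
```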
I have another question: I wanted to use the algorithm on many short examples instead of one long one (dividing the long time series into many small parts of a few hundred samples each). I had to change the data loader but managed to do so. The results, on the other hand, are not as good as they used to be.
Do you have an idea why? (Maybe the structure of the algorithm, or the attention mechanism that cannot take previous examples into account, etc.)
Thanks again.
I think this is because the encoder-decoder framework is too redundant for short-term forecasting. Maybe you can replace the decoder with a simple MLP along the temporal dimension.
Or you may find our latest work TimesNet (https://github.com/thuml/TimesNet) useful, which is also evaluated on M4.
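For the MLP idea, a minimal sketch along these lines (`TemporalMLPHead` and its arguments are illustrative assumptions, not part of the Autoformer codebase):

```python
import torch.nn as nn

class TemporalMLPHead(nn.Module):
    """Map the encoder output of length seq_len to a pred_len forecast
    with a single linear layer applied along the temporal dimension."""
    def __init__(self, seq_len, pred_len):
        super().__init__()
        self.proj = nn.Linear(seq_len, pred_len)

    def forward(self, enc_out):
        # enc_out: [batch, seq_len, channels]
        out = self.proj(enc_out.transpose(1, 2))  # project over time
        return out.transpose(1, 2)                # [batch, pred_len, channels]
```

That is, you keep the encoder and just project its output over the time axis, which is usually enough for short horizons.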
Thanks again for the help.
I may be wrong, but I can't find the code itself in the repository you shared (only the README and images). When will the code itself be uploaded?
I'll consider replacing the decoder with other layers.
Thanks
You may need to read the first sentence of the README.md carefully.
Hi,
Thanks for this repo, great work!
When using the code I receive noisy results (for example, on the ETTm dataset):
As you can see, the prediction is noisy while the ground truth isn't so much. I would like to know whether this is the expected behavior of such a system or whether I'm doing something wrong (the plot is made using pred.flatten() and true.flatten()). The numeric MSE results are fine.
Thank you in advance.