ssyuwang opened 8 months ago
Thank you for the reminder. We have made the necessary modifications in the Imputation file. Alternatively, you can add the parameter `--percent 100`. For the DataEmbedding module, we use settings similar to TimesNet's, since our primary focus in this work is the frozen LM.
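For context, a TimesNet-style DataEmbedding simply sums a value projection, a sinusoidal positional encoding, and a time-feature projection. A minimal sketch of that composition (names and shapes are illustrative, not the repository's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

def token_embedding(x, w):
    # Value projection: (batch, seq_len, n_vars) @ (n_vars, d_model)
    return x @ w

def positional_embedding(seq_len, d_model):
    # Standard sinusoidal positional encoding.
    pos = np.arange(seq_len)[:, None]
    div = np.exp(np.arange(0, d_model, 2) * (-np.log(10000.0) / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(pos * div)
    pe[:, 1::2] = np.cos(pos * div)
    return pe

def temporal_embedding(marks, w):
    # Time-feature projection: (batch, seq_len, n_marks) @ (n_marks, d_model)
    return marks @ w

batch, seq_len, n_vars, n_marks, d_model = 2, 96, 7, 4, 16
x = rng.normal(size=(batch, seq_len, n_vars))
marks = rng.normal(size=(batch, seq_len, n_marks))
w_tok = rng.normal(size=(n_vars, d_model))
w_tmp = rng.normal(size=(n_marks, d_model))

# DataEmbedding = TokenEmbedding + PositionalEmbedding + TemporalEmbedding
emb = token_embedding(x, w_tok) + positional_embedding(seq_len, d_model) + temporal_embedding(marks, w_tmp)
print(emb.shape)  # (2, 96, 16)
```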
Thank you for your response; I have been able to reproduce the expected results. In addition, I would like to ask: can the long-term forecasting structure (instance norm -> patching -> linear probe -> GPT model) also be applied to imputation and achieve good results?
One of the major advantages of patching in forecasting is that it increases the effective lookback window. This does not seem to be particularly important in imputation tasks. We have not tried it specifically; maybe you can give it a try.
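For reference, the forecasting-style pipeline discussed above (instance norm -> patching -> linear projection into the frozen LM) can be sketched roughly as follows. All names, shapes, and hyperparameters here are illustrative assumptions, not the repository's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

def instance_norm(x, eps=1e-5):
    # Normalize each series instance over its time dimension.
    mean = x.mean(axis=1, keepdims=True)
    std = x.std(axis=1, keepdims=True)
    return (x - mean) / (std + eps), mean, std

def patch(x, patch_len, stride):
    # Split (batch, seq_len) into overlapping patches: (batch, n_patches, patch_len)
    batch, seq_len = x.shape
    n_patches = (seq_len - patch_len) // stride + 1
    idx = np.arange(patch_len)[None, :] + stride * np.arange(n_patches)[:, None]
    return x[:, idx]

batch, seq_len, patch_len, stride, d_model = 2, 96, 16, 8, 32
x = rng.normal(size=(batch, seq_len))

x_norm, mean, std = instance_norm(x)
patches = patch(x_norm, patch_len, stride)   # (2, 11, 16)
w_proj = rng.normal(size=(patch_len, d_model))
tokens = patches @ w_proj                    # (2, 11, 32): patch tokens for the frozen GPT-2
print(patches.shape, tokens.shape)
```

At prediction time the stored `mean`/`std` would be used to de-normalize the model output back to the original scale.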
Hello. GPT-2 already has a positional encoding module, so only a linear projection should be needed to project the input time series to the required dimension. However, DataEmbedding (TokenEmbedding + PositionalEmbedding + TemporalEmbedding) is still used in the imputation model `GPT4TS`; may I ask why? In addition, I just started testing imputation (full training) with horizon = 96 and layers = 3, and the results are not as expected. For example, with missing rate 0.125:
```
imputation_ETTh1_mask_0.125_GPT4TS_ETTh1_ftM_sl96_ll0_pl0_dm768_nh8_el2_dl1_df2048_fc1_ebtimeF_dtTrue_Exp_0
mse:0.2134459912776947, mae:0.2767394185066223
imputation_ETTh1_mask_0.125_GPT4TS_ETTh1_ftM_sl96_ll0_pl0_dm768_nh8_el2_dl1_df2048_fc1_ebtimeF_dtTrue_Exp_1
mse:0.20807239413261414, mae:0.27207669615745544
imputation_ETTh1_mask_0.125_GPT4TS_ETTh1_ftM_sl96_ll0_pl0_dm768_nh8_el2_dl1_df2048_fc1_ebtimeF_dtTrue_Exp_2
mse:0.2085033655166626, mae:0.27654552459716797
```
These do not match what the paper reports (MSE: 0.043, MAE: 0.140), even though I used the parameters in the latest scripts file. Could you please provide your settings?
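For anyone comparing numbers like these: imputation benchmarks of this kind are usually scored by masking random positions at the given missing rate and computing MSE/MAE only on the masked entries. A minimal sketch of that evaluation (function names and the stand-in "model" are assumptions, not the repository's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate_imputation(y_true, y_pred, mask):
    # mask == 0 marks the artificially removed (to-be-imputed) points;
    # errors are averaged over those positions only.
    missing = mask == 0
    err = y_pred[missing] - y_true[missing]
    return float(np.mean(err ** 2)), float(np.mean(np.abs(err)))

batch, seq_len, n_vars = 4, 96, 7
missing_rate = 0.125

y_true = rng.normal(size=(batch, seq_len, n_vars))
# Roughly 12.5% of entries are zeroed out in the mask.
mask = (rng.random(y_true.shape) >= missing_rate).astype(float)
# Stand-in for a model's imputed output: ground truth plus small noise.
y_pred = y_true + 0.1 * rng.normal(size=y_true.shape)

mse, mae = evaluate_imputation(y_true, y_pred, mask)
print(mse, mae)
```

If the masked fraction, masking scheme, or normalization differs between runs, reported MSE/MAE values will not be directly comparable, which is one common source of gaps like the one above.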