Hank0626 / PDF

An official implementation of "Periodicity Decoupling Framework for Long-term Series Forecasting" (ICLR 2024)
GNU Affero General Public License v3.0

Code error #7

Open jianlaigeng opened 1 week ago

jianlaigeng commented 1 week ago

```
D:\anaconda\envs\py38\python.exe H:\study\data2\PDF-main\run_longExp.py
Traceback (most recent call last):
  File "H:\study\data2\PDF-main\run_longExp.py", line 159, in <module>
    exp = Exp(args)  # set experiments
  File "H:\study\data2\PDF-main\exp\exp_main.py", line 27, in __init__
    super(Exp_Main, self).__init__(args)
  File "H:\study\data2\PDF-main\exp\exp_basic.py", line 10, in __init__
    self.model = self._build_model().to(self.device)
  File "H:\study\data2\PDF-main\exp\exp_main.py", line 40, in _build_model
    model = model_dict[self.args.model].Model(self.args).float()
  File "H:\study\data2\PDF-main\models\PDF.py", line 53, in __init__
    self.model = PDF_backbone(c_in=c_in, context_window=context_window, target_window=target_window,
  File "H:\study\data2\PDF-main\layers\PDF_backbone.py", line 36, in __init__
    self.kernel_list = [(n, patch_len[i]) for i, n in enumerate(self.period_len)]
  File "H:\study\data2\PDF-main\layers\PDF_backbone.py", line 36, in <listcomp>
    self.kernel_list = [(n, patch_len[i]) for i, n in enumerate(self.period_len)]
IndexError: list index out of range
Args in experiment:
Namespace(activation='gelu', add=False, affine=0, attn_dropout=0.05, batch_size=16, c_out=7, checkpoints='./checkpoints/', d_ff=2048, d_layers=1, d_model=512, data='ETT-small', data_path='ETTh1.csv', dec_in=7, decomposition=0, des='test', devices='0', distil=True, do_predict=False, dropout=0.05, e_layers=2, embed='timeF', embed_type=0, enc_in=7, factor=1, fc_dropout=0.0, features='M', freq='h', gpu=0, head_dropout=0.0, individual=0, is_training=1, itr=2, kernel_list=[3, 7, 9], kernel_size=25, label_len=48, learning_rate=0.0001, log='./logs/LongForecasting/PatchTST_Electricity_336_96.log', loss='mse', lradj='type3', model='PDF', model_id='test', moving_avg=25, n_heads=8, num_workers=10, output_attention=False, padding_patch='end', patch_len=[16], patience=100, pct_start=0.3, period=[24, 12], pred_len=96, random_seed=2021, revin=1, root_path='./dataset/ETT-small/', seq_len=96, serial_conv=False, stride=None, subtract_last=0, target='OT', test_flop=False, train_epochs=100, use_amp=False, use_gpu=True, use_multi_gpu=False, wo_conv=False)
Use GPU: cuda:0
```

Hello, the code keeps throwing this error and won't run. Could you please take a look?

Hank0626 commented 1 week ago

Hi, please provide the script you ran.

jianlaigeng commented 1 week ago

> Hi, please provide the script you ran.

Hello, thank you very much for your reply. I did not run one of the scripts you provide; I ran the run_longExp.py entry file directly, with the ETT-small/ETTh1.csv dataset.

Hank0626 commented 6 days ago

Please provide that as well; the error comes from some args being passed inconsistently (list arguments with mismatched lengths).
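The length mismatch can be reproduced in isolation with the values from the posted Namespace (`period=[24, 12]`, `patch_len=[16]`). This is a minimal sketch of the failing line in `layers/PDF_backbone.py`, not the repository code itself, and the second patch length below is illustrative:

```python
# Values from the reported Namespace: two periods, but only one patch length.
period_len = [24, 12]
patch_len = [16]

try:
    # The failing line builds one (period, patch_len) pair per period,
    # so patch_len[1] is accessed and does not exist.
    kernel_list = [(n, patch_len[i]) for i, n in enumerate(period_len)]
except IndexError as exc:
    print('IndexError:', exc)  # list index out of range

# Passing one patch length per period fixes it (the value 8 is illustrative):
patch_len = [16, 8]
kernel_list = [(n, patch_len[i]) for i, n in enumerate(period_len)]
print(kernel_list)  # [(24, 16), (12, 8)]
```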

jianlaigeng commented 6 days ago

> Please provide that as well; the error comes from some args being passed inconsistently (list arguments with mismatched lengths).

```python
import argparse
import os
import sys
import random

import numpy as np
import torch

from exp.exp_main import Exp_Main

parser = argparse.ArgumentParser(description='Autoformer & Transformer family for Time Series Forecasting')

# random seed
parser.add_argument('--random_seed', type=int, default=2021, help='random seed')

# basic config
parser.add_argument('--is_training', type=int, default=1, help='status')
parser.add_argument('--model_id', type=str, default='test', help='model id')
parser.add_argument('--model', type=str, default='PDF',
                    help='model name, options: [Autoformer, Informer, Transformer]')

# data loader
parser.add_argument('--data', type=str, default='ETT-small', help='dataset type')
parser.add_argument('--root_path', type=str, default='dataset/ETT-small', help='root path of the data file')
parser.add_argument('--data_path', type=str, default='ETTh1.csv', help='data file')
parser.add_argument('--features', type=str, default='M',
                    help='forecasting task, options:[M, S, MS]; M:multivariate predict multivariate, '
                         'S:univariate predict univariate, MS:multivariate predict univariate')
parser.add_argument('--target', type=str, default='OT', help='target feature in S or MS task')
parser.add_argument('--freq', type=str, default='h',
                    help='freq for time features encoding, options:[s:secondly, t:minutely, h:hourly, d:daily, '
                         'b:business days, w:weekly, m:monthly], you can also use more detailed freq like 15min or 3h')
parser.add_argument('--checkpoints', type=str, default='./checkpoints/', help='location of model checkpoints')

# forecasting task
parser.add_argument('--seq_len', type=int, default=96, help='input sequence length')
parser.add_argument('--label_len', type=int, default=48, help='start token length')
parser.add_argument('--pred_len', type=int, default=96, help='prediction sequence length')

# DLinear
# parser.add_argument('--individual', action='store_true', default=False,
#                     help='DLinear: a linear layer for each variate(channel) individually')

# PatchTST
parser.add_argument('--fc_dropout', type=float, default=0.0, help='fully connected dropout')
parser.add_argument('--head_dropout', type=float, default=0.0, help='head dropout')

parser.add_argument('--add', action='store_true', default=False, help='add')
parser.add_argument('--wo_conv', action='store_true', default=False, help='without convolution')
parser.add_argument('--serial_conv', action='store_true', default=False, help='serial convolution')

parser.add_argument('--kernel_list', type=int, nargs='+', default=[3, 7, 9], help='kernel size list')
parser.add_argument('--patch_len', type=int, nargs='+', default=[16], help='patch high')
parser.add_argument('--period', type=int, nargs='+', default=[24, 12], help='period list')
parser.add_argument('--stride', type=int, nargs='+', default=None, help='stride')

parser.add_argument('--padding_patch', default='end', help='None: None; end: padding on the end')
parser.add_argument('--revin', type=int, default=1, help='RevIN; True 1 False 0')
parser.add_argument('--affine', type=int, default=0, help='RevIN-affine; True 1 False 0')
parser.add_argument('--subtract_last', type=int, default=0, help='0: subtract mean; 1: subtract last')
parser.add_argument('--decomposition', type=int, default=0, help='decomposition; True 1 False 0')
parser.add_argument('--kernel_size', type=int, default=25, help='decomposition-kernel')
parser.add_argument('--individual', type=int, default=0, help='individual head; True 1 False 0')

# Formers
parser.add_argument('--embed_type', type=int, default=0,
                    help='0: default 1: value patch_embedding + temporal patch_embedding + positional patch_embedding '
                         '2: value patch_embedding + temporal patch_embedding 3: value patch_embedding + positional '
                         'patch_embedding 4: value patch_embedding')
parser.add_argument('--enc_in', type=int, default=7,
                    help='global_encoder input size')  # DLinear with --individual, use this hyperparameter as the number of channels
parser.add_argument('--dec_in', type=int, default=7, help='decoder input size')
parser.add_argument('--c_out', type=int, default=7, help='output size')
parser.add_argument('--d_model', type=int, default=512, help='dimension of model')
parser.add_argument('--n_heads', type=int, default=8, help='num of heads')
parser.add_argument('--e_layers', type=int, default=2, help='num of global_encoder layers')
parser.add_argument('--d_layers', type=int, default=1, help='num of decoder layers')
parser.add_argument('--d_ff', type=int, default=2048, help='dimension of fcn')
parser.add_argument('--moving_avg', type=int, default=25, help='window size of moving average')
parser.add_argument('--factor', type=int, default=1, help='attn factor')
parser.add_argument('--distil', action='store_false', default=True,
                    help='whether to use distilling in global_encoder, using this argument means not using distilling')
parser.add_argument('--dropout', type=float, default=0.05, help='dropout')
parser.add_argument('--attn_dropout', type=float, default=0.05, help='attention dropout')
parser.add_argument('--embed', type=str, default='timeF',
                    help='time features encoding, options:[timeF, fixed, learned]')
parser.add_argument('--activation', type=str, default='gelu', help='activation')
parser.add_argument('--output_attention', action='store_true', help='whether to output attention in encoder')
parser.add_argument('--do_predict', action='store_true', help='whether to predict unseen future data')

# optimization
parser.add_argument('--num_workers', type=int, default=10, help='data loader num workers')
parser.add_argument('--itr', type=int, default=2, help='experiments times')
parser.add_argument('--train_epochs', type=int, default=100, help='train epochs')
parser.add_argument('--batch_size', type=int, default=16, help='batch size of train input data')
parser.add_argument('--patience', type=int, default=100, help='early stopping patience')
parser.add_argument('--learning_rate', type=float, default=0.0001, help='optimizer learning rate')
parser.add_argument('--des', type=str, default='test', help='exp description')
parser.add_argument('--loss', type=str, default='mse', help='loss function')
parser.add_argument('--lradj', type=str, default='type3', help='adjust learning rate')
parser.add_argument('--pct_start', type=float, default=0.3, help='pct_start')
parser.add_argument('--use_amp', action='store_true', default=False, help='use automatic mixed precision training')

# GPU
parser.add_argument('--use_gpu', type=bool, default=True, help='use gpu')
parser.add_argument('--gpu', type=int, default=0, help='gpu')
parser.add_argument('--use_multi_gpu', action='store_true', default=False, help='use multiple gpus')
parser.add_argument('--devices', type=str, default='0', help='device ids of multiple gpus')
parser.add_argument('--test_flop', action='store_true', default=False, help='See utils/tools for usage')

# output log file
parser.add_argument('--log', type=str, default='./logs/LongForecasting/PatchTST_Electricity_336_96.log',
                    help='path of output log file')

args = parser.parse_args()

# output
sys.stdout = open(args.log, 'w')

# random seed
fix_seed = args.random_seed
random.seed(fix_seed)
torch.manual_seed(fix_seed)
np.random.seed(fix_seed)

args.use_gpu = True if torch.cuda.is_available() and args.use_gpu else False

if args.use_gpu and args.use_multi_gpu:
    args.devices = args.devices.replace(' ', '')
    device_ids = args.devices.split(',')
    args.device_ids = [int(id_) for id_ in device_ids]
    args.gpu = args.device_ids[0]

print('Args in experiment:')
print(args)

Exp = Exp_Main

if args.is_training:
    for ii in range(args.itr):
        # setting record of experiments
        setting = '{}_{}_{}_ft{}_sl{}_ll{}_pl{}_dm{}_nh{}_el{}_dl{}_df{}_fc{}_eb{}_dt{}_{}_{}'.format(
            args.model_id,
            args.model,
            args.data,
            args.features,
            args.seq_len,
            args.label_len,
            args.pred_len,
            args.d_model,
            args.n_heads,
            args.e_layers,
            args.d_layers,
            args.d_ff,
            args.factor,
            args.embed,
            args.distil,
            args.des, ii)

        exp = Exp(args)  # set experiments
        print('>>>>>>>start training : {}>>>>>>>>>>>>>>>>>>>>>>>>>>'.format(setting))
        exp.train(setting)

        print('>>>>>>>testing : {}<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<'.format(setting))
        exp.test(setting)

        if args.do_predict:
            print('>>>>>>>predicting : {}<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<'.format(setting))
            exp.predict(setting, True)

        torch.cuda.empty_cache()
else:
    ii = 0
    setting = '{}_{}_{}_ft{}_sl{}_ll{}_pl{}_dm{}_nh{}_el{}_dl{}_df{}_fc{}_eb{}_dt{}_{}_{}'.format(
        args.model_id, args.model, args.data, args.features, args.seq_len, args.label_len, args.pred_len,
        args.d_model, args.n_heads, args.e_layers, args.d_layers, args.d_ff, args.factor, args.embed,
        args.distil, args.des, ii)

    exp = Exp(args)  # set experiments
    print('>>>>>>>testing : {}<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<'.format(setting))
    exp.test(setting, test=1)
    torch.cuda.empty_cache()
```
Hello, this is the run_longExp.py file I ran. Could you please take a look at which setting is causing the problem?
Hank0626 commented 6 days ago

I meant the shell script used to launch run_longExp.py, not the file's contents. Something like this:

```shell
if [ ! -d "./logs" ]; then
    mkdir ./logs
fi

if [ ! -d "./logs/LongForecasting" ]; then
    mkdir ./logs/LongForecasting
fi

model_name=PDF
root_path_name=./dataset/
data_path_name=ETTh1.csv
model_id_name=ETTh1
data_name=ETTh1

random_seed=2021

seq_len=720
for pred_len in 96 192
do
  python -u run_longExp.py \
    --random_seed 2021 \
    --is_training 1 \
    --root_path ./dataset/ \
    --data_path ETTh1.csv \
    --model_id ETTh1'_'$seq_len'_'$pred_len \
    --model PDF \
    --data ETTh1 \
    --features M \
    --seq_len $seq_len \
    --pred_len $pred_len \
    --enc_in 7 \
    --e_layers 3 \
    --n_heads 4 \
    --d_model 16 \
    --d_ff 128 \
    --dropout 0.25 \
    --fc_dropout 0.15 \
    --kernel_list 3 7 11 \
    --period 24 \
    --patch_len 1 \
    --stride 1 \
    --des Exp \
    --pct_start 0.2 \
    --train_epochs 100 \
    --patience 10 \
    --itr 1 --batch_size 128 --learning_rate 0.0001 >logs/LongForecasting/$model_name'_'$model_id_name'_'$seq_len'_'$pred_len.log
done
```
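As an aside, the constraint this thread turns on (one `--patch_len` and one `--stride` entry per `--period` entry) could be checked up front. The helper below is not part of the repository and its name is hypothetical; it is a sketch of turning the opaque IndexError into a readable message:

```python
# Hypothetical pre-flight check: --period, --patch_len, and --stride are
# parallel lists in run_longExp.py, so they must all have the same length.
def check_list_args(period, patch_len, stride):
    lengths = {'period': len(period), 'patch_len': len(patch_len), 'stride': len(stride)}
    if len(set(lengths.values())) != 1:
        raise ValueError('list arguments must all have the same length, got {}'.format(lengths))

check_list_args([24], [1], [1])  # matches the script above: all lengths are 1
```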