DAMO-DI-ML / KDD2023-DCdetector


Training time for the SMD dataset is too long #13

Closed WeixuWang closed 1 year ago

WeixuWang commented 1 year ago

As mentioned in the paper, when d_model is set to 256, the average running time over 100 iterations is less than 0.2 s. However, when I set batch_size=32 and d_model=256, each iteration takes 3~4 s, which is much slower. The GPU I am using is a 3090. Is there any problem with my parameter settings?

[screenshot of the training log]

tianzhou2011 commented 1 year ago

Check the script: we use `export CUDA_VISIBLE_DEVICES=3`. If you have only one GPU, change it to 0.
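A plausible explanation (not confirmed in the thread) is that on a single-GPU machine the index `3` does not exist, so the framework silently falls back to the CPU, which would account for the ~10x slowdown. A minimal sketch of checking which devices a process would see; the helper name `visible_gpu_indices` is hypothetical, not part of the repo:

```python
import os

# CUDA_VISIBLE_DEVICES must be set before the first CUDA call in the process.
# With only one physical GPU, exposing index "3" leaves no visible device.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

def visible_gpu_indices():
    """Parse the GPU indices a CUDA process would see (hypothetical helper)."""
    raw = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    return [int(i) for i in raw.split(",") if i.strip()]

print(visible_gpu_indices())  # → [0]
```

After setting the variable to `0`, `torch.cuda.is_available()` should return `True` on a single-GPU box, whereas with `3` the training loop would run on CPU.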

stillwang96 commented 1 year ago

> As mentioned in the paper, when d_model is set to 256, the average running time over 100 iterations is less than 0.2 s. However, when I set batch_size=32 and d_model=256, each iteration takes 3~4 s, which is much slower. The GPU I am using is a 3090. Is there any problem with my parameter settings?
>
> [screenshot of the training log]

Hi, my GPU is also a 3090, and I don't know whether my running time is normal. If you have already changed your device to 0, please help me check it. Thank you.

```
================ Hyperparameters ===============
anormly_ratio: 0.6
batch_size: 256
d_model: 256
data_path: SMD
dataset: SMD
device_ids: [0, 1, 2, 3]
devices: 0,1,2,3
e_layers: 3
gpu: 0
index: 137
input_c: 38
k: 3
loss_fuc: MSE
lr: 0.0001
mode: train
model_save_path: checkpoints
n_heads: 1
num_epochs: 2
output_c: 38
patch_size: [5, 7]
rec_timeseries: True
use_gpu: True
use_multi_gpu: True
win_size: 105
==================== Train ===================
speed: 0.5840s/iter; left time: 3172.7369s
speed: 0.5829s/iter; left time: 3108.4683s
speed: 0.5827s/iter; left time: 3049.0521s
speed: 0.5852s/iter; left time: 3004.0091s
speed: 0.5883s/iter; left time: 2961.1121s
speed: 0.5838s/iter; left time: 2880.1116s
speed: 0.5809s/iter; left time: 2807.3317s
```
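As a sanity check on the log above, the reported "left time" is consistent with speed x remaining iterations, so the ~0.58 s/iter figure is what the trainer is actually measuring (not a display glitch). A quick back-of-the-envelope check, using the numbers from the first log line:

```python
# Values copied from the first "Train" log line above.
speed = 0.5840         # seconds per iteration
left_time = 3172.7369  # estimated seconds remaining

# The trainer's ETA should be speed * remaining iterations,
# so dividing recovers how many iterations are still queued.
remaining_iters = left_time / speed
print(round(remaining_iters))  # → 5433
```

At roughly 3x the paper's reported per-iteration time, this log would still point to a configuration or device-placement issue rather than normal variance.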