KimMeen / Time-LLM

[ICLR 2024] Official implementation of " 🦙 Time-LLM: Time Series Forecasting by Reprogramming Large Language Models"
https://arxiv.org/abs/2310.01728
Apache License 2.0

Code running error #89

Closed dreamerforwuyang closed 1 month ago

dreamerforwuyang commented 1 month ago

When I run with the ETTh1 data format I get a runtime error. How can I fix it?

E:\python\python.exe K:\project\Time-LLM-main\Time-LLM-main\run_main.py --task_name long_term_forecast --is_training 1 --root_path ./dataset/ETT-small/ --data_path ETTh1.csv --model_id ETTh1_512_96 --model GPT2 --data ETTh1 --features M --seq_len 512 --label_len 48 --pred_len 96 --factor 3 --enc_in 7 --dec_in 7 --c_out 7 --des "'Exp'" --itr 1 --d_model 32 --d_ff 128 --batch_size 24 --learning_rate 0.01 --llm_layers 32 --train_epochs 10 --model_comment "'TimeLLM-ETTh1'"

0it [00:00, ?it/s]
Traceback (most recent call last):
  File "", line 1, in
  File "E:\python\Lib\multiprocessing\spawn.py", line 120, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "E:\python\Lib\multiprocessing\spawn.py", line 129, in _main
    prepare(preparation_data)
  File "E:\python\Lib\multiprocessing\spawn.py", line 240, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "E:\python\Lib\multiprocessing\spawn.py", line 291, in _fixup_main_from_path
    main_content = runpy.run_path(main_path,
  File "", line 291, in run_path
  File "", line 98, in _run_module_code
  File "", line 88, in _run_code
  File "K:\project\Time-LLM-main\Time-LLM-main\run_main.py", line 209, in
    for i, (batch_x, batch_y, batch_x_mark, batch_y_mark) in tqdm(enumerate(train_loader)):
  File "E:\python\Lib\site-packages\tqdm\std.py", line 1195, in __iter__
    for obj in iterable:
  File "E:\python\Lib\site-packages\accelerate\data_loader.py", line 451, in __iter__
    dataloader_iter = super().__iter__()
  File "E:\python\Lib\site-packages\torch\utils\data\dataloader.py", line 442, in __iter__
    return self._get_iterator()
  File "E:\python\Lib\site-packages\torch\utils\data\dataloader.py", line 388, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "E:\python\Lib\site-packages\torch\utils\data\dataloader.py", line 1043, in __init__
    w.start()
  File "E:\python\Lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "E:\python\Lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "E:\python\Lib\multiprocessing\context.py", line 336, in _Popen
    return Popen(process_obj)
  File "E:\python\Lib\multiprocessing\popen_spawn_win32.py", line 45, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "E:\python\Lib\multiprocessing\spawn.py", line 158, in get_preparation_data
    _check_not_importing_main()
  File "E:\python\Lib\multiprocessing\spawn.py", line 138, in _check_not_importing_main
    raise RuntimeError('''
RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.
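The guard the error message describes can be sketched as follows; `train` is a hypothetical stand-in for the training loop at run_main.py line 209, not the actual code from the repository:

```python
from multiprocessing import freeze_support

def train():
    """Hypothetical stand-in for the training loop in run_main.py."""
    return "training started"

if __name__ == "__main__":
    # Windows has no fork(); multiprocessing "spawn"s each DataLoader
    # worker by re-importing the main module. Without this guard, every
    # worker would re-execute the whole script at import time and raise
    # the RuntimeError shown in the traceback above.
    freeze_support()
    print(train())
```

With the guard in place, workers that re-import the module see `__name__ != "__main__"` and skip the training code, so only the parent process runs it.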

kwuking commented 1 month ago

Hi, it looks like this issue is caused by running the process on your local CPU. If you are running locally on a CPU, set num_workers to 0 and the use_gpu parameter to False, then try running again. However, please note that since the base model uses LLaMA2, which is an LLM, we recommend using 8 A100 GPUs for execution.
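Assuming run_main.py exposes a `--num_workers` argument (common in this family of forecasting codebases, but worth verifying against its argparse setup), the original command could be rerun with worker processes disabled:

```shell
python run_main.py \
  --task_name long_term_forecast --is_training 1 \
  --root_path ./dataset/ETT-small/ --data_path ETTh1.csv \
  --model_id ETTh1_512_96 --model GPT2 --data ETTh1 \
  --features M --seq_len 512 --label_len 48 --pred_len 96 \
  --factor 3 --enc_in 7 --dec_in 7 --c_out 7 --des "'Exp'" \
  --itr 1 --d_model 32 --d_ff 128 --batch_size 24 \
  --learning_rate 0.01 --llm_layers 32 --train_epochs 10 \
  --model_comment "'TimeLLM-ETTh1'" --num_workers 0
```

With `--num_workers 0`, data loading stays in the main process, so no child processes are spawned and the Windows bootstrapping error cannot occur.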