meta-llama / llama

Inference code for Llama models

Facing this error while running for the first time #1005

Open · KingstonJoseph12 opened this issue 10 months ago

KingstonJoseph12 commented 10 months ago

Describe the bug

torchrun --nproc_per_node 1 example_chat_completion_test.py \
    --ckpt_dir D:\Coding\Environment\Llama 2\llama-main\llama-2-13b \
    --tokenizer_path D:\Coding\Environment\Llama 2\llama-main\tokenizer.model \
    --max_seq_len 512 --max_batch_size 6

Output

NOTE: Redirects are currently not supported in Windows or MacOs.
[W ..\torch\csrc\distributed\c10d\socket.cpp:601] [c10d] The client socket has failed to connect to [DESKTOP-J07NS1T]:29500 (system error: 10049 - The requested address is not valid in its context.).
[W ..\torch\csrc\distributed\c10d\socket.cpp:601] [c10d] The client socket has failed to connect to [DESKTOP-J07NS1T]:29500 (system error: 10049 - The requested address is not valid in its context.).
[W ..\torch\csrc\distributed\c10d\socket.cpp:601] [c10d] The client socket has failed to connect to [DESKTOP-J07NS1T]:29500 (system error: 10049 - The requested address is not valid in its context.).
[W ..\torch\csrc\distributed\c10d\socket.cpp:601] [c10d] The client socket has failed to connect to [DESKTOP-J07NS1T]:29500 (system error: 10049 - The requested address is not valid in its context.).
> initializing model parallel with size 1
> initializing ddp with size 1
> initializing pipeline with size 1
Traceback (most recent call last):
  File "D:\Coding\Environment\Llama_2\llama-main\example_chat_completion_test.py", line 109, in <module>
    fire.Fire(main)
  File "D:\Coding\Environment\llama_envs\Lib\site-packages\fire\core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Coding\Environment\llama_envs\Lib\site-packages\fire\core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
                                ^^^^^^^^^^^^^^^^^^^^
  File "D:\Coding\Environment\llama_envs\Lib\site-packages\fire\core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
                ^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Coding\Environment\Llama_2\llama-main\example_chat_completion_test.py", line 40, in main
    generator = Llama.build(
                ^^^^^^^^^^^^
  File "D:\Coding\Environment\Llama_2\llama-main\llama\generation.py", line 102, in build
    assert len(checkpoints) > 0, f"no checkpoint files found in {ckpt_dir}"
           ^^^^^^^^^^^^^^^^^^^^
AssertionError: no checkpoint files found in D:\Coding\Environment\Llama
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 26352) of binary: D:\Coding\Environment\llama_envs\python.exe
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "D:\Coding\Environment\llama_envs\Scripts\torchrun.exe\__main__.py", line 7, in <module>
  File "D:\Coding\Environment\llama_envs\Lib\site-packages\torch\distributed\elastic\multiprocessing\errors\__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "D:\Coding\Environment\llama_envs\Lib\site-packages\torch\distributed\run.py", line 794, in main
    run(args)
  File "D:\Coding\Environment\llama_envs\Lib\site-packages\torch\distributed\run.py", line 785, in run
    elastic_launch(
  File "D:\Coding\Environment\llama_envs\Lib\site-packages\torch\distributed\launcher\api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Coding\Environment\llama_envs\Lib\site-packages\torch\distributed\launcher\api.py", line 250, in launch_agent    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
example_chat_completion_test.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2024-01-20_05:28:04
  host      : DESKTOP-J07NS1T
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 26352)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
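
For reference, the assertion that fires in llama/generation.py line 102 is a glob over the checkpoint directory. This is a minimal sketch reconstructed from the traceback and assertion message (the exact source may differ):

```python
from pathlib import Path

def find_checkpoints(ckpt_dir: str) -> list:
    # Llama.build() looks for sharded weight files (*.pth) in ckpt_dir.
    checkpoints = sorted(Path(ckpt_dir).glob("*.pth"))
    # With the path truncated at the space ("...\Llama" instead of
    # "...\Llama 2\llama-main\llama-2-13b"), the glob matches nothing
    # and this assertion fires.
    assert len(checkpoints) > 0, f"no checkpoint files found in {ckpt_dir}"
    return checkpoints
```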

Runtime Environment

subramen commented 10 months ago

The error seems to be occurring because the ckpt_dir path contains spaces that haven't been escaped, so the path gets cut off at the first space: AssertionError: no checkpoint files found in D:\Coding\Environment\Llama

Maybe try quoting the path, --ckpt_dir "D:\Coding\Environment\Llama 2\llama-main\llama-2-13b", or just remove the space in the folder name (Llama 2)?
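
To illustrate the splitting, here is a rough sketch using shlex as a stand-in for the shell's tokenizer (the actual splitting is done by the Windows shell, not Python; posix=False keeps the backslashes literal so the Windows path survives):

```python
import shlex

# Unquoted: the argument breaks apart at the space, so ckpt_dir
# ends up as just 'D:\Coding\Environment\Llama'.
unquoted = r'--ckpt_dir D:\Coding\Environment\Llama 2\llama-main\llama-2-13b'
print(shlex.split(unquoted, posix=False))
# ['--ckpt_dir', 'D:\\Coding\\Environment\\Llama', '2\\llama-main\\llama-2-13b']

# Quoted: the path stays one token (non-POSIX mode retains the quotes).
quoted = r'--ckpt_dir "D:\Coding\Environment\Llama 2\llama-main\llama-2-13b"'
print(shlex.split(quoted, posix=False))
# ['--ckpt_dir', '"D:\\Coding\\Environment\\Llama 2\\llama-main\\llama-2-13b"']
```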