aws-neuron / aws-neuron-sdk

Powering AWS purpose-built machine learning chips. Blazing fast and cost-effective, natively integrated into PyTorch and TensorFlow, and integrated with your favorite AWS services.
https://aws.amazon.com/machine-learning/neuron/

RuntimeError when running llama2_inference.ipynb #849

Open · crane-sapia opened this issue 8 months ago

crane-sapia commented 8 months ago

Hi all,

I was following the tutorial here to run the trace on llama2-7B. However, when running these lines in the Trace the model section:

runner.trace(traced_model_path=traced_model_path,
             tp_degree=tp_degree,
             batch_size=batch_size,
             max_context_length=max_context_length,
             max_new_tokens=max_new_tokens)

I encountered the following error:

Error information:

```shell
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib64/python3.9/multiprocessing/spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "/usr/lib64/python3.9/multiprocessing/spawn.py", line 125, in _main
    prepare(preparation_data)
  File "/usr/lib64/python3.9/multiprocessing/spawn.py", line 236, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "/usr/lib64/python3.9/multiprocessing/spawn.py", line 287, in _fixup_main_from_path
    main_content = runpy.run_path(main_path,
  File "/usr/lib64/python3.9/runpy.py", line 288, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "/usr/lib64/python3.9/runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/ec2-user/neuronx-distributed/examples/inference/llama2_inference.py", line 14, in <module>
    runner.trace(
  File "/home/ec2-user/neuronx-distributed/examples/inference/runner.py", line 183, in trace
    traced_model = parallel_model_trace(
  File "/home/ec2-user/aws_neuron_venv_pytorch/lib64/python3.9/site-packages/neuronx_distributed/trace/trace.py", line 144, in parallel_model_trace
    manager = ctx.Manager()
  File "/usr/lib64/python3.9/multiprocessing/context.py", line 57, in Manager
    m.start()
  File "/usr/lib64/python3.9/multiprocessing/managers.py", line 554, in start
    self._process.start()
  File "/usr/lib64/python3.9/multiprocessing/process.py", line 121, in start
    self._popen = self._Popen(self)
  File "/usr/lib64/python3.9/multiprocessing/context.py", line 284, in _Popen
    return Popen(process_obj)
  File "/usr/lib64/python3.9/multiprocessing/popen_spawn_posix.py", line 32, in __init__
    super().__init__(process_obj)
  File "/usr/lib64/python3.9/multiprocessing/popen_fork.py", line 19, in __init__
    self._launch(process_obj)
  File "/usr/lib64/python3.9/multiprocessing/popen_spawn_posix.py", line 42, in _launch
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "/usr/lib64/python3.9/multiprocessing/spawn.py", line 154, in get_preparation_data
    _check_not_importing_main()
  File "/usr/lib64/python3.9/multiprocessing/spawn.py", line 134, in _check_not_importing_main
    raise RuntimeError('''
RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
Traceback (most recent call last):
  File "/home/ec2-user/neuronx-distributed/examples/inference/llama2_inference.py", line 14, in <module>
    runner.trace(
  File "/home/ec2-user/neuronx-distributed/examples/inference/runner.py", line 183, in trace
    traced_model = parallel_model_trace(
  File "/home/ec2-user/aws_neuron_venv_pytorch/lib64/python3.9/site-packages/neuronx_distributed/trace/trace.py", line 144, in parallel_model_trace
    manager = ctx.Manager()
  File "/usr/lib64/python3.9/multiprocessing/context.py", line 57, in Manager
    m.start()
  File "/usr/lib64/python3.9/multiprocessing/managers.py", line 558, in start
    self._address = reader.recv()
  File "/usr/lib64/python3.9/multiprocessing/connection.py", line 254, in recv
    buf = self._recv_bytes()
  File "/usr/lib64/python3.9/multiprocessing/connection.py", line 418, in _recv_bytes
    buf = self._recv(4)
  File "/usr/lib64/python3.9/multiprocessing/connection.py", line 387, in _recv
    raise EOFError
EOFError
```

Can anyone give some hints on how to address this problem?
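For what it's worth, the RuntimeError text itself points at the `if __name__ == '__main__':` idiom: `parallel_model_trace` starts worker processes with the "spawn" start method, which re-imports the main module in each child, so top-level code that launches processes gets re-executed. I'm guessing that running the notebook code as a plain script (like my llama2_inference.py) without that guard triggers it. A minimal, Neuron-independent sketch of the guard idiom (all names here are hypothetical, just to illustrate the pattern):

```python
import multiprocessing

def square(x):
    """Worker function; must be importable by spawned child processes."""
    return x * x

def run_pool():
    # "spawn" re-imports the main module in every child process, so
    # process creation must only happen under the __main__ guard below.
    ctx = multiprocessing.get_context("spawn")
    with ctx.Pool(2) as pool:
        return pool.map(square, [1, 2, 3])

if __name__ == "__main__":
    # Without this guard, each spawned child would re-execute run_pool()
    # while importing the module and raise the same RuntimeError as above.
    print(run_pool())  # [1, 4, 9]
```

If that's the cause, presumably moving the `runner.trace(...)` call under such a guard in the script would avoid the error, but I'd appreciate confirmation.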

aws-taylor commented 8 months ago

Hello @crane-sapia,

One of our engineers is working to reproduce the issue. We'll update this issue once we've got more information.

-Taylor

matkle commented 4 months ago

Hi @aws-taylor, any updates on this? Thx!