flowersteam / lamorel

Lamorel is a Python library designed for RL practitioners eager to use Large Language Models (LLMs).
MIT License

Could I directly run lamorel/examples/PPO_finetuning/main.py? #7

Closed yanxue7 closed 1 year ago

yanxue7 commented 1 year ago

Can I directly run lamorel/examples/PPO_finetuning/main.py? I am wondering how to launch PPO_finetuning/main.py from bash. I ran `torchrun main.py` after changing the decorator to `@hydra.main(config_path='./', config_name='local_gpu_config.yaml')`, but the following error occurs:

```
Error executing job with overrides: []
Traceback (most recent call last):
  File "/home/yanxue/Grounding/lamorel/examples/PPO_finetuning/main.py", line 166, in main
    lm_server = Caller(config_args.lamorel_args,
  File "/home/yanxue/Grounding/lamorel/lamorel/src/lamorel/caller.py", line 43, in __init__
    self._llm_group = dist.new_group(
  File "/home/yanxue/anaconda3/envs/dlp/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2960, in new_group
    raise RuntimeError(
RuntimeError: The new group's rank should be within the the world_size set by init_process_group
```

```
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 968832) of binary: /home/yanxue/anaconda3/envs/dlp/bin/python
Traceback (most recent call last):
  File "/home/yanxue/anaconda3/envs/dlp/bin/torchrun", line 33, in <module>
    sys.exit(load_entry_point('torch==1.12.1', 'console_scripts', 'torchrun')())
  File "/home/yanxue/anaconda3/envs/dlp/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
    return f(*args, **kwargs)
  File "/home/yanxue/anaconda3/envs/dlp/lib/python3.10/site-packages/torch/distributed/run.py", line 761, in main
    run(args)
  File "/home/yanxue/anaconda3/envs/dlp/lib/python3.10/site-packages/torch/distributed/run.py", line 752, in run
    elastic_launch(
  File "/home/yanxue/anaconda3/envs/dlp/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/yanxue/anaconda3/envs/dlp/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
main.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2023-05-11_22:51:31
  host      : taizun-R282-Z96-00
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 968832)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
```
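For context, the RuntimeError itself can be reproduced with plain torch.distributed, independently of lamorel: initialise the process group with a world size of 1 (which is what a bare `torchrun main.py`, i.e. a single process, ends up with) and then request a group containing more ranks. A minimal standalone sketch, not lamorel's code (the rendezvous address is arbitrary):

```python
import torch.distributed as dist

# World size 1: only one process was launched, as with a bare `torchrun main.py`.
dist.init_process_group(
    backend="gloo",                       # CPU backend is enough for the demo
    init_method="tcp://127.0.0.1:29500",  # arbitrary local rendezvous address
    rank=0,
    world_size=1,
)

# Requesting a group that contains rank 1 fails, because rank 1 does not exist
# in a world of size 1 -> the same RuntimeError as in the traceback above.
dist.new_group(ranks=[0, 1])
```

As far as I understand, lamorel's Caller builds a group spanning the LLM server process(es) and the RL script, so it needs more processes than the single one a bare torchrun invocation starts.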
ClementRomac commented 1 year ago

Hi,

As Lamorel itself calls torchrun, could you first try launching your script with `python main.py`?
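That is, something along these lines (the directory and config are assumptions based on your message):

```bash
# inside examples/PPO_finetuning/
# instead of wrapping the script in torchrun yourself:
torchrun main.py   # -> a single process, world_size=1, hence the new_group error
# let Lamorel handle the distributed setup (it calls torchrun internally):
python main.py
```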

ClementRomac commented 1 year ago

Closing this due to inactivity.

yanxue7 commented 1 year ago

I have successfully run main.py using the command in the README. Thank you very much!
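For anyone hitting the same issue: the command I mean is the launcher one from the README. From memory it looks roughly like the sketch below, where `<PROJECT_PATH>` is a placeholder for the absolute path to the lamorel checkout; please check the current README for the exact module path and arguments.

```bash
# Sketch of the README-style launch (verify against the README).
python -m lamorel_launcher.launch \
    --config-path <PROJECT_PATH>/examples/PPO_finetuning \
    --config-name local_gpu_config \
    rl_script_args.path=<PROJECT_PATH>/examples/PPO_finetuning/main.py
```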

ClementRomac commented 1 year ago

Happy to hear that :)