Closed MaoPopovich closed 4 months ago
Excuse me, when I run the first-stage instruction tuning on my machine with two A40 GPUs, the following error occurs:
```
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
Traceback (most recent call last):
  File "train_mem.py", line 13, in <module>
    train()
  File "/home/qinghua_mao/work/GraphGPT/graphgpt/train/train_graph.py", line 763, in train
    model_args, data_args, training_args = parser.parse_args_into_dataclasses()
  File "/home/qinghua_mao/lib/anaconda3/envs/graphgpt/lib/python3.8/site-packages/transformers/hf_argparser.py", line 338, in parse_args_into_dataclasses
    obj = dtype(**inputs)
  File "<string>", line 137, in __init__
  File "/home/qinghua_mao/lib/anaconda3/envs/graphgpt/lib/python3.8/site-packages/transformers/training_args.py", line 1551, in __post_init__
    and (self.device.type != "cuda")
  File "/home/qinghua_mao/lib/anaconda3/envs/graphgpt/lib/python3.8/site-packages/transformers/training_args.py", line 2027, in device
    return self._setup_devices
  File "/home/qinghua_mao/lib/anaconda3/envs/graphgpt/lib/python3.8/site-packages/transformers/utils/generic.py", line 63, in __get__
    cached = self.fget(obj)
  File "/home/qinghua_mao/lib/anaconda3/envs/graphgpt/lib/python3.8/site-packages/transformers/training_args.py", line 1963, in _setup_devices
    self.distributed_state = PartialState(
  File "/home/qinghua_mao/lib/anaconda3/envs/graphgpt/lib/python3.8/site-packages/accelerate/state.py", line 240, in __init__
    torch.cuda.set_device(self.device)
  File "/home/qinghua_mao/lib/anaconda3/envs/graphgpt/lib/python3.8/site-packages/torch/cuda/__init__.py", line 404, in set_device
    torch._C._cuda_setDevice(device)
RuntimeError: CUDA error: invalid device ordinal
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
You are using a model of type llama to instantiate a model of type GraphLlama. This is not supported for all configurations of models and can yield errors.
```
`CUDA error: invalid device ordinal` usually means the launch command requests a GPU index that does not exist on the machine (e.g. a process tries to use device 2 or 3 on a two-GPU box). You could refer to this issue to deal with it: https://github.com/lm-sys/FastChat/issues/550.
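As a quick sanity check before launching, you can verify that every requested ordinal is within the set of visible devices. The helper below is a hypothetical sketch (not part of GraphGPT); it only inspects `CUDA_VISIBLE_DEVICES`, so it runs without `torch`, but it mirrors the condition under which `torch.cuda.set_device(n)` raises `invalid device ordinal`.

```python
import os

def ordinal_is_visible(requested_ordinal: int) -> bool:
    """Return True if the requested CUDA ordinal is within the visible set.

    torch.cuda.set_device(n) fails with 'invalid device ordinal' when
    n >= the number of visible devices. This check reads only the
    CUDA_VISIBLE_DEVICES environment variable, so it needs no GPU.
    """
    visible = os.environ.get("CUDA_VISIBLE_DEVICES")
    if visible is None:
        # All physical devices are visible; cannot check further without torch.
        return True
    count = len([d for d in visible.split(",") if d.strip() != ""])
    return requested_ordinal < count

# Two A40s exposed as ordinals 0 and 1:
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
print(ordinal_is_visible(1))  # True: ordinal 1 exists
print(ordinal_is_visible(3))  # False: would raise 'invalid device ordinal'
```

If the check fails, either set `CUDA_VISIBLE_DEVICES=0,1` so the script sees exactly the GPUs that exist, or reduce the number of worker processes in the launch script to match the GPU count.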