wayveai / Driving-with-LLMs

PyTorch implementation for the paper "Driving with LLMs: Fusing Object-Level Vector Modality for Explainable Autonomous Driving"

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! #21

Closed: guiyuliu closed this issue 5 months ago

guiyuliu commented 5 months ago

File "/home/~~/miniconda3/envs/drive_llm/lib/python3.8/site-packages/transformers/models/llama/modeling_llama.py", line 90, in forward return self.weight * hidden_states.to(input_dtype) RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!

guiyuliu commented 5 months ago

This error does not occur when using a single GPU. How should the code be modified to run on multiple GPUs? I am running the Evaluate for DrivingQA command.
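Two common workarounds for this kind of device mismatch, offered only as a hedged sketch (this is not the repo authors' fix, and it assumes the model is loaded through Hugging Face `transformers`): either pin the run to a single GPU, or move every input tensor to the device that holds the model's input embeddings before calling `forward`/`generate`.

```python
import os

# Option 1: restrict the process to a single GPU before any CUDA context
# is created, which avoids cross-device sharding entirely.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import torch

# Option 2: if the model is intentionally sharded across GPUs
# (e.g. loaded with device_map="auto"), place the inputs on the device
# of the input embeddings so the first layers see matching devices.
def to_model_device(batch: dict, model) -> dict:
    device = model.get_input_embeddings().weight.device
    return {k: v.to(device) if torch.is_tensor(v) else v for k, v in batch.items()}
```

Whether option 2 is enough depends on how the evaluation script builds its inputs; if it constructs tensors with a hard-coded `cuda:0`, those call sites would need the same treatment.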