datamllab / LongLM

[ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning
https://arxiv.org/pdf/2401.01325.pdf
MIT License

Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! when resuming training #37

Open humza-sami opened 4 months ago

humza-sami commented 4 months ago

I tried to run example.py on an A100 (80GB) GPU. There appears to be a bug at line 41: https://github.com/datamllab/LongLM/blob/ee92c841eaf8c6e0989f49c2d63231ba06136345/example.py#L41

The current implementation does not move the `input_ids` tensor onto the GPU, so the model on `cuda:0` receives CPU inputs and raises the device-mismatch error above. I fixed it by moving the inputs to CUDA during tokenization: `input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")`
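
For reference, a minimal sketch of the corrected call in context, assuming a Hugging Face causal LM already loaded on the GPU (the model name and prompt below are placeholders, not the exact values used in example.py):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model name; substitute the model used in example.py.
model_name = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16
).to("cuda")

prompt = "..."  # long-context prompt from example.py

# The original line built input_ids on the CPU; moving them to the model's
# device avoids "Expected all tensors to be on the same device".
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Using `model.device` instead of a hard-coded `"cuda"` keeps the inputs on whatever device the model was actually placed on.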