Open quqxui opened 1 day ago
I think you could follow the suggestion reported here: https://github.com/XiaoxinHe/G-Retriever/issues/29.
In general, you could try with my fork in the colab branch, in which I tried to address some of the issues related to the usage of one GPU: https://github.com/giuseppefutia/G-Retriever/tree/colab.
I tested it with the following command, and it should work:
!CUDA_LAUNCH_BLOCKING=1 python train.py --dataset webqsp --model_name graph_llm --llm_frozen False --batch_size 1 --eval_batch_size 2
Also consider using the --max-memory parameter we introduced in this PR: https://github.com/XiaoxinHe/G-Retriever/pull/25
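For context, a common way such a flag gets wired up is through the `max_memory` argument of Hugging Face's `from_pretrained` (used together with `device_map="auto"`), which maps each device to a memory cap so the model can be sharded to fit a single GPU. The sketch below is an assumption about how the PR's flag might translate into that dict; the helper name `build_max_memory` and the default values are hypothetical, not from the G-Retriever code:

```python
# Hedged sketch (assumption): convert a --max-memory value into the
# max_memory dict that transformers' from_pretrained accepts alongside
# device_map="auto". Keys are GPU indices (and "cpu" for offloading);
# values are human-readable memory caps.
def build_max_memory(gib_per_gpu, num_gpus=1, cpu_gib=32):
    """Build a max_memory mapping, e.g. {0: '20GiB', 'cpu': '32GiB'}."""
    mem = {i: f"{gib_per_gpu}GiB" for i in range(num_gpus)}
    mem["cpu"] = f"{cpu_gib}GiB"  # allow CPU offload of weights that don't fit
    return mem

# Cap a single GPU at 20 GiB; anything beyond that is offloaded to CPU RAM.
print(build_max_memory(20))
```

In a model-loading call this would look like `AutoModelForCausalLM.from_pretrained(name, device_map="auto", max_memory=build_max_memory(20))`, letting accelerate place layers so they fit on one card.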
Hi Xiaoxin,
I want to express my appreciation for the incredible work you and your team are doing.
However, I encountered a problem: when I attempt to run the process on a single GPU, I get the following error:
When I use two GPUs, the process runs without any issues. However, due to limited resources, using two GPUs is not feasible for me.
Could you please advise on how I might be able to successfully run the fine-tuning on just one GPU?
To reproduce the bug, run: