Closed: KL4805 closed this issue 3 months ago
Thank you for your interest in our work. The GPU configuration for UrbanGPT is 8×A100-PCIE 40GB.
Thanks for your response.
Did you use LoRA when building UrbanGPT? I know that when vLLM is used to serve LoRA models on V100 GPUs, such an error can occur. Could you please check?
We did not use LoRA in UrbanGPT. Unfortunately, we have not encountered a similar error before and may not be able to offer an effective solution. It is possible that the issue stems from compatibility problems among the GPU, CUDA, cuDNN, and torch versions.
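One compatibility mismatch worth ruling out (a guess on our part, not something confirmed in this thread): vLLM and many recent checkpoints default to bfloat16, which needs compute capability 8.0+ (Ampere), while the V100 is only 7.0. A minimal pure-Python sketch of that check, with the capability table hard-coded as an assumption:

```python
# Hypothetical helper (not part of the UrbanGPT repo): flags a common
# source of vLLM errors on older GPUs. bfloat16 kernels require compute
# capability >= 8.0 (e.g. A100); the V100 is capability 7.0.

KNOWN_CAPABILITIES = {  # (major, minor) compute capability per GPU model
    "V100": (7, 0),
    "A100": (8, 0),
}

def supports_bf16(gpu: str) -> bool:
    """True if the GPU's compute capability is 8.0 or higher."""
    return KNOWN_CAPABILITIES[gpu] >= (8, 0)

for gpu in ("V100", "A100"):
    dtype = "bfloat16" if supports_bf16(gpu) else "float16"
    print(f"{gpu}: load the model in {dtype}")
```

On a live machine, `torch.cuda.get_device_capability()` reports the actual capability; forcing float16 on a V100 often sidesteps this class of error.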
Thanks for your answers anyway. I will try removing all of the Ray/vLLM/FastChat-related components, use Hugging Face only, and see what happens.
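For anyone following the same Hugging Face-only route, a minimal sketch of what that fallback could look like. The checkpoint path and prompt below are placeholders, and float16 is chosen because the V100 lacks bfloat16 support; this is an assumption about the setup, not code from the UrbanGPT repo:

```python
# Sketch: run inference with plain Hugging Face Transformers,
# bypassing Ray/vLLM/FastChat entirely. Requires a local checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "path/to/urbangpt-checkpoint"  # placeholder path

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.float16,  # V100 (sm_70) has no bfloat16 kernels
    device_map="auto",          # place layers on the available GPU(s)
)

prompt = "..."  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```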
Hello authors,
Thanks for your research work and code!
I would like to run `urbangpt_eval.sh` on an NVIDIA V100 GPU, but encountered the following error. Could you please let me know your hardware platform so that I can diagnose the issue?