wayveai / Driving-with-LLMs

PyTorch implementation for the paper "Driving with LLMs: Fusing Object-Level Vector Modality for Explainable Autonomous Driving"

RuntimeError: probability tensor contains either `inf`, `nan` or element < 0 #4

Open Phoebe-ovo opened 11 months ago

Phoebe-ovo commented 11 months ago

Hello, when I run the Perception and Action Prediction evaluation, I get this error with decapoda-research/llama-7b-hf. How can I fix it? Thanks!

xzebin775 commented 11 months ago

Hi, have you solved it? I've come across the same problem.

melights commented 11 months ago

Hi @Phoebe-ovo @xzebin775, thanks for reporting this issue! We are not able to reproduce this error on the GPUs we have. Could you please let me know what GPUs you were using? Also, can you try setting load_in_8bit to False to see whether that solves the issue?

Phoebe-ovo commented 11 months ago

The GPU I used is a V100. What GPUs were you using?

xzebin775 commented 11 months ago

It is a GTX 1080 Ti.

melights commented 11 months ago

Thanks for confirming. Can you check whether setting load_in_8bit to False here solves the problem?
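
For reference, a minimal sketch of what that change looks like, assuming the model is loaded through Hugging Face transformers roughly as in the repo's evaluation script (the surrounding arguments are illustrative, not the exact call site):

```python
# Hedged sketch: disable 8-bit quantisation when loading the base model.
# bitsandbytes int8 kernels can be unreliable on older GPUs (the V100 and
# GTX 1080 Ti predate the int8 tensor-core support they target), which may
# surface as inf/nan logits during generation.
import torch
from transformers import LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",  # base model used by the repo at the time
    load_in_8bit=False,               # was True; load in half precision instead
    torch_dtype=torch.float16,        # keeps memory manageable without 8-bit
    device_map="auto",                # requires `accelerate`
)
```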

xzebin775 commented 10 months ago

I set load_in_8bit to False, but I get the error below. It seems I can't load the model onto the GPU:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 86.00 MiB (GPU 0; 10.92 GiB total capacity; 10.44 GiB already allocated; 22.62 MiB free; 10.45 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
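
As an aside, the allocator hint from that message can be set as sketched below, but it only mitigates fragmentation; LLaMA-7B weights alone take roughly 13-14 GB in fp16, so the model is unlikely to fit in a 1080 Ti's 11 GB without quantisation in any case:

```python
# Sketch: configure the CUDA caching allocator before torch is imported
# (equivalently: export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128).
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # must be imported after the variable is set to take effect
```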

melights commented 10 months ago

After a thorough investigation, we are still not able to reproduce the error on the GPUs we have (NVIDIA A100 and 3090), but it might be related to other issues. I suggest you try these:

Also, we noticed that the base model we used, decapoda-research/llama-7b-hf, was removed by its author from the Hugging Face model hub; we are testing workarounds.

xjturjc commented 8 months ago

I met the same problem. I set do_sample = False and then it worked; I don't know what impact this will have. (Also on a V100 GPU.)
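
For context, the error in the issue title is raised by torch.multinomial inside generate() when the softmaxed logits contain inf/nan values, and do_sample=False switches to greedy decoding, which skips that sampling step entirely. A hedged sketch of the workaround (model name and prompt are illustrative; the repo's actual generate arguments may differ):

```python
# Greedy decoding avoids the multinomial sampling path that raises
# "probability tensor contains either `inf`, `nan` or element < 0".
# Note: sampling parameters such as temperature/top_p are ignored, so
# outputs will differ from sampled generations.
from transformers import LlamaForCausalLM, LlamaTokenizer

name = "decapoda-research/llama-7b-hf"  # checkpoint discussed in this thread
tokenizer = LlamaTokenizer.from_pretrained(name)
model = LlamaForCausalLM.from_pretrained(name, device_map="auto")

prompt = "The vehicle ahead is braking, so"  # illustrative prompt
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```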

kevinchiu19 commented 6 months ago

Change 'decapoda-research/llama-7b-hf' to 'huggyllama/llama-7b' and set load_in_8bit=False.

It works for me. (My env: V100)
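
A sketch of this workaround, assuming the same from_pretrained call site as above; huggyllama/llama-7b is a community mirror of the deleted LLaMA-7B checkpoint:

```python
# Swap the removed decapoda-research checkpoint for the huggyllama mirror
# and load without 8-bit quantisation.
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("huggyllama/llama-7b")
model = LlamaForCausalLM.from_pretrained(
    "huggyllama/llama-7b",
    load_in_8bit=False,
)
```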

uniquezhengjie commented 6 months ago

> Change 'decapoda-research/llama-7b-hf' to 'huggyllama/llama-7b' and set load_in_8bit=False.
>
> It works for me. (My env: V100)

When I try this, I get: ValueError: The device_map provided does not give any device for the following parameters: base_model.model.weighted_mask
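
This ValueError comes from accelerate's weight dispatch: the repo's custom base_model.model.weighted_mask parameter gets no entry in the automatically inferred device map. One common workaround, offered here as an assumption rather than a fix verified against this repo, is to place the whole model on a single device so that no per-module map is needed:

```python
# Mapping everything to one GPU sidesteps per-module dispatch, so custom
# parameters such as weighted_mask no longer need their own device_map entry.
from transformers import LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained(
    "huggyllama/llama-7b",
    load_in_8bit=False,
    device_map={"": 0},  # "" = the whole model, placed on CUDA device 0
)
```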