Phoebe-ovo opened 11 months ago
Hi, have you solved it? I've come across the same problem.
Hi @Phoebe-ovo @xzebin775, thanks for reporting this issue! We were not able to reproduce this error on the GPUs we have. Could you let me know which GPUs you were using? Also, can you try setting load_in_8bit to False to see if that solves the issue?
The GPU I used is a V100. Which GPUs were you using?
It is a GTX 1080 Ti.
Thanks for confirming. Can you try whether setting load_in_8bit to False here solves the problem?
I set load_in_8bit to False, but I get the error below. It seems I can't load the model onto the GPU:
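For reference, a minimal sketch of how the load_in_8bit flag is typically passed to transformers.AutoModelForCausalLM.from_pretrained. The helper function and model id below are illustrative, not the project's actual loading code:

```python
from typing import Any, Dict


def build_load_kwargs(load_in_8bit: bool) -> Dict[str, Any]:
    """Assemble keyword arguments for from_pretrained (sketch)."""
    kwargs: Dict[str, Any] = {"device_map": "auto"}
    if load_in_8bit:
        # 8-bit quantized weights; requires the bitsandbytes package
        kwargs["load_in_8bit"] = True
    return kwargs


# Illustrative usage (downloads the model, so left commented out):
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(
#     "decapoda-research/llama-7b-hf", **build_load_kwargs(False)
# )
```

With load_in_8bit=False the weights are loaded unquantized, which roughly doubles the GPU memory needed compared to 8-bit loading.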
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 86.00 MiB (GPU 0; 10.92 GiB total capacity; 10.44 GiB already allocated; 22.62 MiB free; 10.45 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
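The error message itself suggests trying max_split_size_mb via PYTORCH_CUDA_ALLOC_CONF. A minimal sketch of applying that hint (the value 128 is illustrative):

```python
import os

# Must be set before torch initializes its CUDA allocator, so place it
# at the very top of the script, before `import torch`.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

# import torch  # import torch only after the variable is set
```

Note that this only mitigates fragmentation; it cannot make a 7B-parameter model in fp16 (roughly 13 GiB of weights) fit on an 11 GiB GTX 1080 Ti, so 8-bit loading or offloading would still be needed on that card.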
Upon thorough investigation, we were not able to reproduce the error on the GPUs we have (NVIDIA A100 and 3090), but it might be related to other issues. I suggest you try these:
Also, we noticed that the base model we used, decapoda-research/llama-7b-hf, was removed by its author from the Hugging Face model hub, and we are testing workarounds.
I met the same problem, and setting do_sample = False made it work. I don't know what other impact this will have. (Same setup, with a V100 GPU.)
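An illustrative generate() call with the workaround from this comment, i.e. greedy decoding instead of sampling. The variable names are placeholders for whatever model and inputs the evaluation script builds:

```python
# Workaround sketch: disable sampling so generate() takes the greedy path.
gen_kwargs = dict(
    do_sample=False,     # the workaround reported in this thread
    num_beams=1,         # plain greedy decoding, no beam search
    max_new_tokens=128,  # illustrative generation limit
)

# Illustrative usage with an already-loaded model/tokenizer:
# output_ids = model.generate(input_ids, **gen_kwargs)
```

With do_sample=False, temperature and top-p settings are ignored, so outputs become deterministic; that may change evaluation results even if it avoids the crash.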
Change 'decapoda-research/llama-7b-hf' to 'huggyllama/llama-7b' and set load_in_8bit=False.
It works for me. (My environment: V100.)
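A sketch of the two changes suggested here; huggyllama/llama-7b is the commenter's suggested mirror of the removed checkpoint, and the loading call is illustrative:

```python
# The two edits from this comment, as constants:
BASE_MODEL = "huggyllama/llama-7b"  # was "decapoda-research/llama-7b-hf"
LOAD_IN_8BIT = False                # was True

# Illustrative usage (downloads the model, so left commented out):
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(
#     BASE_MODEL, load_in_8bit=LOAD_IN_8BIT, device_map="auto"
# )
```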
> Change 'decapoda-research/llama-7b-hf' to 'huggyllama/llama-7b' and 'load_in_8bit=False'.
> It works for me. (My env v100)
I get this error: ValueError: The device_map provided does not give any device for the following parameters: base_model.model.weighted_mask
Hello, when I evaluate Perception and Action Prediction, I get this error with decapoda-research/llama-7b-hf. How can I fix it? Thanks!