Open · SeekPoint opened this issue 1 year ago
I got a workaround:
```
(gh_Vicuna-LoRA-RLHF-PyTorch) amd00@asus00:~/llm_dev/Vicuna-LoRA-RLHF-PyTorch$ git diff
diff --git a/supervised_finetune.py b/supervised_finetune.py
index 4cfbc76..a850789 100644
--- a/supervised_finetune.py
+++ b/supervised_finetune.py
@@ -71,7 +71,6 @@ if ddp:
 print(args.model_path)
 model = LlamaForCausalLM.from_pretrained(
     args.model_path,
```
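The diff as pasted is cut off before the deleted line, but the hunk header (`@@ -71,7 +71,6 @@`) says exactly one line was removed, and the traceback below shows `load_in_8bit=True,` at line 74, so that is presumably the line the workaround drops. A minimal sketch of the patched call under that assumption (hypothetical reconstruction, not the verbatim file):

```python
# Hypothetical post-workaround state of supervised_finetune.py (assumes the
# truncated diff deletes the `load_in_8bit=True,` argument seen in the traceback).
print(args.model_path)
model = LlamaForCausalLM.from_pretrained(
    args.model_path,
    device_map=device_map,  # plain (non-quantized) load; bitsandbytes' GPU path is no longer needed
)
```

With the int8 flag gone the model loads unquantized, so the CPU-only bitsandbytes build stops being fatal, at the cost of much higher memory use.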
```
(gh_Vicuna-LoRA-RLHF-PyTorch) amd00@asus00:~/llm_dev/Vicuna-LoRA-RLHF-PyTorch$ python supervised_finetune.py --data_path './data/merge_sample.json' --output_path 'lora-Vicuna' --model_path './weights/vicuna-7b' --eval_steps 200 --save_steps 200 --test_size 1

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

and submit this information together with your error trace to:
https://github.com/TimDettmers/bitsandbytes/issues
bin /home/amd00/anaconda3/envs/gh_Vicuna-LoRA-RLHF-PyTorch/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so
/home/amd00/anaconda3/envs/gh_Vicuna-LoRA-RLHF-PyTorch/lib/python3.10/site-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
/home/amd00/anaconda3/envs/gh_Vicuna-LoRA-RLHF-PyTorch/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cadam32bit_grad_fp32
CUDA SETUP: Loading binary /home/amd00/anaconda3/envs/gh_Vicuna-LoRA-RLHF-PyTorch/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so...
./weights/vicuna-7b
Overriding torch_dtype=None with `torch_dtype=torch.float16` due to requirements of `bitsandbytes` to enable model loading in mixed int8. Either pass torch_dtype=torch.float16 or don't pass this argument at all to remove this warning.
```
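The warnings above already show why 8-bit loading cannot work on this machine: the loader picked `libbitsandbytes_cpu.so` (the CPU-only build, hence the `undefined symbol: cadam32bit_grad_fp32`), so GPU quantization is unavailable, which is consistent with accelerate spilling modules to the CPU. A quick sanity check (my own sketch, not part of the repo):

```python
# Sketch: check whether PyTorch sees a CUDA device at all. If this prints
# False/None, bitsandbytes falls back to its CPU-only binary and
# load_in_8bit=True cannot succeed.
import torch

print(torch.cuda.is_available())  # expect True for 8-bit loading to work
print(torch.version.cuda)         # None on a CPU-only PyTorch build
```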
```
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/amd00/llm_dev/Vicuna-LoRA-RLHF-PyTorch/supervised_finetune.py:72 in <module>
│
│    69 │   device_map = {"": int(os.environ.get("LOCAL_RANK") or 0)}
│    70 │   GRADIENT_ACCUMULATION_STEPS = GRADIENT_ACCUMULATION_STEPS // world_size
│    71 print(args.model_path)
│ ❱  72 model = LlamaForCausalLM.from_pretrained(
│    73 │   args.model_path,
│    74 │   load_in_8bit=True,
│    75 │   device_map=device_map
│
│ /home/amd00/.local/lib/python3.10/site-packages/transformers/modeling_utils.py:2740 in
│ from_pretrained
│
│   2737 │   │   │   │   │   key: device_map[key] for key in device_map.keys() if key not in modu
│   2738 │   │   │   │   }
│   2739 │   │   │   │   if "cpu" in device_map_without_lm_head.values() or "disk" in device_map
│ ❱ 2740 │   │   │   │   │   raise ValueError(
│   2741 │   │   │   │   │   │   """
│   2742 │   │   │   │   │   │   Some modules are dispatched on the CPU or the disk. Make sure yo
│   2743 │   │   │   │   │   │   the quantized model. If you want to dispatch the model on the CP
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError:
Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit
the quantized model. If you want to dispatch the model on the CPU or the disk while keeping
these modules in 32-bit, you need to set `load_in_8bit_fp32_cpu_offload=True` and pass a custom
`device_map` to `from_pretrained`. Check
https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu
for more details.
(gh_Vicuna-LoRA-RLHF-PyTorch) amd00@asus00:~/llm_dev/Vicuna-LoRA-RLHF-PyTorch$
```
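The error text itself points at the other route: keep `load_in_8bit=True` but opt in to fp32 CPU offload with a custom `device_map`, per the linked quantization docs. A minimal sketch of that route (assuming a transformers version that ships `BitsAndBytesConfig`; note it still needs a CUDA-enabled bitsandbytes build, which the log above shows is missing here):

```python
from transformers import BitsAndBytesConfig, LlamaForCausalLM

# Sketch of the offload route from the error message / linked docs: modules
# that stay on the GPU are quantized to int8, while modules mapped to "cpu"
# are kept in fp32 instead of aborting with the ValueError above.
quant_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_enable_fp32_cpu_offload=True,
)

model = LlamaForCausalLM.from_pretrained(
    "./weights/vicuna-7b",
    quantization_config=quant_config,
    device_map="auto",  # or a hand-written map that places e.g. lm_head on "cpu"
)
```

On this box, though, no GPU-capable bitsandbytes is installed at all, so removing `load_in_8bit=True` (the diff at the top) is the workaround that actually applies.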