Vision-CAIR / MiniGPT-4

Open-source code for MiniGPT-4 and MiniGPT-v2 (https://minigpt-4.github.io, https://minigpt-v2.github.io/)
BSD 3-Clause "New" or "Revised" License

`load_in_8bit_fp32_cpu_offload=True` #39

Open thibaudart opened 1 year ago

thibaudart commented 1 year ago

Any idea how to solve this:

Some modules are dispatched on the CPU or the disk. Make sure you have enough GPU RAM to fit the quantized model. If you want to dispatch the model on the CPU or the disk while keeping these modules in 32-bit, you need to set load_in_8bit_fp32_cpu_offload=True and pass a custom device_map to from_pretrained. Check https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu for more details.

I have 48 GB of VRAM; the GPU RAM must be enough!
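
For reference, the flag named in the message corresponds to the `llm_int8_enable_fp32_cpu_offload` argument of `BitsAndBytesConfig` in recent transformers releases; a minimal sketch of the pattern it asks for:

from transformers import BitsAndBytesConfig

# Quantize to 8-bit, but keep any modules dispatched to the CPU in fp32.
# A custom device_map must still be passed to from_pretrained (see below).
quantization_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_enable_fp32_cpu_offload=True,
)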

TsuTikgiau commented 1 year ago

48 GB of GPU RAM should be enough for the demo without 8-bit. Can you set `low_resource` to False in eval_configs/minigpt4_eval.yaml and check whether you still have this issue?
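
For reference, `low_resource` sits in the model section of eval_configs/minigpt4_eval.yaml; with it set to False, the language model is loaded in fp16 on the GPU instead of 8-bit. A sketch of the relevant lines (other keys omitted, and the exact layout may differ by version):

model:
  arch: mini_gpt4
  low_resource: False  # was True; False disables 8-bit loading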

vrunm commented 1 year ago

I followed the code given in the Hugging Face docs:

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Custom device map from the docs example: everything on GPU 0 except
# lm_head, which stays on the CPU.
device_map = {
    "transformer.word_embeddings": 0,
    "transformer.word_embeddings_layernorm": 0,
    "lm_head": "cpu",
    "transformer.h": 0,
    "transformer.ln_f": 0,
}

# Keep the CPU-offloaded modules in fp32 instead of quantizing them.
quantization_config = BitsAndBytesConfig(llm_int8_enable_fp32_cpu_offload=True)

model = AutoModelForCausalLM.from_pretrained("AlekseyKorshuk/vicuna-7b", device_map='auto', quantization_config=quantization_config)
tokenizer = AutoTokenizer.from_pretrained("AlekseyKorshuk/vicuna-7b")

I am getting this error:

TypeError: __init__() got an unexpected keyword argument 'load_in_8bit_fp32_cpu_offload'

diaojunxian commented 1 year ago

Try this, passing the custom device_map you defined instead of 'auto':

model = AutoModelForCausalLM.from_pretrained("AlekseyKorshuk/vicuna-7b", device_map=device_map, quantization_config=quantization_config)
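
Pieced together, a corrected version might look like the sketch below. One caveat: the device_map keys in the docs example are BLOOM module names; for a Vicuna (LLaMA-based) checkpoint the keys must match its own module names. The names below are an assumption for a LlamaForCausalLM; check model.named_modules() if they differ.

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Quantize to 8-bit and keep CPU-offloaded modules in fp32.
quantization_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_enable_fp32_cpu_offload=True,
)

# Assumed LLaMA-style module names: embeddings and decoder layers on GPU 0,
# the LM head offloaded to the CPU.
device_map = {
    "model.embed_tokens": 0,
    "model.layers": 0,
    "model.norm": 0,
    "lm_head": "cpu",
}

model = AutoModelForCausalLM.from_pretrained(
    "AlekseyKorshuk/vicuna-7b",
    device_map=device_map,  # the custom map, not 'auto'
    quantization_config=quantization_config,
)
tokenizer = AutoTokenizer.from_pretrained("AlekseyKorshuk/vicuna-7b")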

mirajdeepbhandari commented 6 months ago

I solved that error like this; you can do the same for your model:

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Load model and tokenizer; the flag named in the error message corresponds
# to llm_int8_enable_fp32_cpu_offload on BitsAndBytesConfig.
quantization_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_enable_fp32_cpu_offload=True)

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1", quantization_config=quantization_config)
model = PeftModel.from_pretrained(model, "mirajbhandari/mistral-7b-chat-finetune", device_map="auto")

tokenizer = AutoTokenizer.from_pretrained("mirajbhandari/mistral-7b-chat-finetune")
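
A quick sanity check for the model loaded above (a minimal sketch; the plain prompt is an assumption, and the fine-tune may expect its chat template instead):

# Generate a short reply from the loaded model.
inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))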