Closed nevilshah235 closed 5 months ago
Code

```python
import os
from bunny.model.builder import load_pretrained_model
from bunny.util.mm_utils import get_model_name_from_path

model_path = 'bunny-phi-1.5-siglip-lora'
model_base = 'phi-1_5'
model_type = 'phi-1.5'

model_path = os.path.expanduser(model_path)
model_name = get_model_name_from_path(model_path)

tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path, model_base, model_name, model_type)
```
When I try to generate the merged LoRA weights for the model in 4-bit and 8-bit, I get the following errors.
4-bit model:

8-bit model:
Can the Bunny models be loaded in 4-bit or 8-bit quantised mode?
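A hedged sketch of what I tried: Bunny's loader appears to follow the LLaVA-style `load_pretrained_model` signature, which in LLaVA accepts `load_8bit`/`load_4bit` flags. Whether Bunny's builder accepts the same keywords is an assumption on my part (worth checking in `bunny/model/builder.py`); the helper below just maps a bit-width to those hypothetical flags.

```python
def quantization_kwargs(bits):
    """Map a bit-width to LLaVA-style loader flags.

    Assumption: Bunny's load_pretrained_model, like LLaVA's,
    accepts load_8bit / load_4bit keyword arguments.
    """
    if bits == 4:
        return {"load_4bit": True}
    if bits == 8:
        return {"load_8bit": True}
    return {}  # full precision: no quantization flags

# Hypothetical usage (requires the Bunny repo installed):
# tokenizer, model, image_processor, context_len = load_pretrained_model(
#     model_path, model_base, model_name, model_type,
#     **quantization_kwargs(4),
# )
```

Note also that merging LoRA adapters into an already-quantized 4-/8-bit base typically fails in PEFT, so the usual workaround is to merge in fp16 first and quantize the merged model afterwards.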