Seems like disabling `quantization_config` in `AutoModelForCausalLM.from_pretrained` gets the model to load and gives decent results?
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    base_model_id,  # Mistral, same as before
    # quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
```
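For reference, the `bnb_config` being commented out above is the notebook's 4-bit QLoRA setup; if I'm remembering it right, it's roughly:

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with nested/double quantization,
# computing in bfloat16 (roughly what the notebook uses)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```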
But ... even if it loads, this may not be handling quantization correctly ...
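One quick sanity check on what actually loaded (a minimal sketch, assuming the `model` from above): with quantization disabled I'd expect plain float32/float16 weights, whereas a bitsandbytes 4-bit load shows packed uint8 parameters and a much smaller memory footprint.

```python
# plain load: expect torch.float32 (or float16 if torch_dtype is set);
# a bitsandbytes 4-bit load shows packed torch.uint8 tensors instead
print({p.dtype for p in model.parameters()})

# Mistral-7B is ~28 GB in fp32 / ~14 GB in fp16, vs. ~4 GB in 4-bit
print(f"memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
```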
When I run the tutorial here: https://github.com/brevdev/notebooks/blob/main/mistral-finetune.ipynb, everything works until

which gives me:
I'm running:

(not sure which other package versions are relevant, but happy to share)
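If it helps with reproducing, something like this prints the versions that are probably relevant (the package list is my guess, not from the notebook):

```python
import importlib.metadata as metadata

# packages most likely to matter for this notebook (my guess at the list)
for pkg in ("torch", "transformers", "bitsandbytes", "accelerate", "peft", "datasets"):
    try:
        print(pkg, metadata.version(pkg))
    except metadata.PackageNotFoundError:
        print(pkg, "not installed")
```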
Anyone have any thoughts? Thanks!