charismaticchiu opened 8 months ago
@haotian-liu
Do you think it is caused by the non-default max_length or by do_sample=True? Should I manually unset max_length or set do_sample: true in the config.json? I actually tried both, but I still get the same warning. Maybe the problem comes from the lmsys/vicuna-13b-v1.5 checkpoint?
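For reference, the flags the warning complains about end up in the base checkpoint's generation config, and you can inspect or adjust them from Python. A minimal sketch, assuming only the lmsys/vicuna-13b-v1.5 id from this thread (the output path is a placeholder):

from transformers import GenerationConfig

# Pull the generation config that ships with the base checkpoint and look at
# the sampling-related fields the warning mentions.
gen_cfg = GenerationConfig.from_pretrained("lmsys/vicuna-13b-v1.5")
print(gen_cfg.do_sample, gen_cfg.temperature, gen_cfg.top_p, gen_cfg.max_length)

# Make the config self-consistent: either enable sampling...
gen_cfg.do_sample = True
# ...or reset the sampling-only fields to their defaults instead:
# gen_cfg.temperature = 1.0
# gen_cfg.top_p = 1.0

# Writes a generation_config.json that can sit next to the merged weights.
gen_cfg.save_pretrained("./fixed-generation-config")  # placeholder path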
Same issue here
I solved this issue by downgrading the Transformers version with pip install git+https://github.com/huggingface/transformers@v4.31-release
Question
Hi, I fine-tuned my own model with LoRA (new-v1.5-13b-lora-665k-custom) using finetune_lora.sh, but I am having trouble merging the LoRA weights with the lmsys/vicuna-13b-v1.5 backbone. Can anyone shed some light? Thank you!
The command I used is the scripts/merge_lora_weights.py merge script, and the error is below:
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/project/LLaVA/scripts/merge_lora_weights.py", line 22, in <module>
    merge_lora(args)
  File "/project/LLaVA/scripts/merge_lora_weights.py", line 10, in merge_lora
    model.save_pretrained(args.save_model_path)
  File "/home1/XXX/.conda/envs/llava2/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2364, in save_pretrained
    model_to_save.generation_config.save_pretrained(save_directory)
  File "/home1/XXX/.conda/envs/llava2/lib/python3.10/site-packages/transformers/generation/configuration_utils.py", line 560, in save_pretrained
    raise ValueError(
ValueError: The generation config instance is invalid -- `.validate()` throws warnings and/or exceptions. Fix these issues to save the configuration.

Thrown during validation:
[UserWarning('`do_sample` is set to `False`. However, `temperature` is set to `0.9` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`.'), UserWarning('`do_sample` is set to `False`. However, `top_p` is set to `0.6` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`.')]
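If you would rather stay on a newer Transformers release than downgrade, the save fails because the merged model inherits temperature=0.9 and top_p=0.6 while do_sample stays False, and recent save_pretrained calls refuse to write that inconsistent generation config. One workaround is to make the config consistent inside the merge script right before saving. A minimal sketch, assuming the usual scripts/merge_lora_weights.py flow with load_pretrained_model from llava.model.builder; the paths and CLI defaults here are placeholders, not the originals:

import argparse

from llava.mm_utils import get_model_name_from_path
from llava.model.builder import load_pretrained_model


def merge_lora(args):
    model_name = get_model_name_from_path(args.model_path)
    tokenizer, model, image_processor, context_len = load_pretrained_model(
        args.model_path, args.model_base, model_name, device_map="cpu"
    )

    # The base checkpoint sets temperature/top_p but leaves do_sample False,
    # which newer Transformers rejects when saving the generation config.
    # Either enable sampling or reset the sampling-only fields to defaults:
    model.generation_config.do_sample = True
    # (alternative: model.generation_config.temperature = 1.0
    #               model.generation_config.top_p = 1.0)

    model.save_pretrained(args.save_model_path)
    tokenizer.save_pretrained(args.save_model_path)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--model-path", required=True)
    parser.add_argument("--model-base", default="lmsys/vicuna-13b-v1.5")
    parser.add_argument("--save-model-path", required=True)
    merge_lora(parser.parse_args())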