Closed: kavindie closed this issue 6 months ago.
The `IncompatibleKeys` info is normal; a newly initialized LLaMA checkpoint normally doesn't produce it, so please consider whether it is related to `low_resource`. Since the model outputs natural language rather than garbled text, I think the weights are being loaded correctly, but the results may be worse because of `low_resource`.
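For context, PyTorch's `model.load_state_dict(state_dict, strict=False)` returns an `IncompatibleKeys` named tuple of `missing_keys` and `unexpected_keys`, which is where messages like this come from. Below is a minimal, dependency-free sketch of that comparison; the function and key names are illustrative, not the repo's actual code:

```python
# Sketch of how missing/unexpected keys are computed when a checkpoint is
# loaded non-strictly (what PyTorch reports as IncompatibleKeys).
def diff_state_dicts(model_keys, checkpoint_keys):
    """Return (missing, unexpected) parameter names."""
    model_keys = set(model_keys)
    checkpoint_keys = set(checkpoint_keys)
    missing = sorted(model_keys - checkpoint_keys)     # in model, not in checkpoint
    unexpected = sorted(checkpoint_keys - model_keys)  # in checkpoint, not in model
    return missing, unexpected

# Hypothetical example: a checkpoint that lacks a projection layer
# but carries an extra head the model doesn't define.
missing, unexpected = diff_state_dicts(
    ["llama.embed_tokens.weight", "llama_proj.weight"],
    ["llama.embed_tokens.weight", "lm_head.weight"],
)
print(missing)      # ['llama_proj.weight']
print(unexpected)   # ['lm_head.weight']
```

Empty lists mean every weight matched; non-empty lists are often expected when only part of the model (e.g. a separately trained projection layer) comes from a given checkpoint.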
Dear authors, thank you for your work and your continued support of this GitHub repo; I truly appreciate it.
When I run the demo I get the following warnings:
I have followed the instructions for getting vicuna-7b-v0, using the following links:

1. Download the original llama 7B weights
2. Download the vicuna delta
3. Get the Vicuna weights by applying the delta:

```shell
python3 -m fastchat.model.apply_delta --base-model-path /path/to/llama-7b --target-model-path /path/to/output/vicuna-7b --delta-path lmsys/vicuna-7b-delta-v0
```
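As a quick sanity check after `apply_delta`, you can verify that the output directory looks like a complete Hugging Face checkpoint before pointing the demo at it. The required file names below are an assumption based on the usual HF layout, not something the repo specifies:

```python
import os

# Assumed minimal layout of a merged Hugging Face checkpoint directory.
REQUIRED = ["config.json", "tokenizer_config.json"]

def looks_like_hf_checkpoint(path):
    """Rough check: required metadata files plus at least one weight shard."""
    names = set(os.listdir(path))
    has_meta = all(f in names for f in REQUIRED)
    has_weights = any(
        n.startswith("pytorch_model") and n.endswith(".bin") for n in names
    )
    return has_meta and has_weights

# Usage (path is the merge output from apply_delta):
# print(looks_like_hf_checkpoint("/path/to/output/vicuna-7b"))
```

If this check fails, the merge likely did not finish cleanly and the demo may be loading partially initialized weights.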
Please note that I am using `low_resource=True`. Can you please tell me if I am missing a crucial step along the way, or do you have any idea why I am getting this warning?
I am asking because the answers generated by the model are too simplistic or wrong. For example, if I ask "Why do you think this is a video of a man riding a bicycle?", the answer is "The video shows a man riding a bicycle in a tunnel", but the video is from an egocentric view and not of a bicycle rider. Likewise, for the question "What colour was the backpack?", the answer was "black" when the correct answer was "red".
Thank you!