wallaceloos opened this issue 11 months ago
It is strange. Maybe you can try the Baidu Yun link first, which does not require unzipping; I have checked that all the layers you mentioned are contained in the .bin file. I will check the correctness of the zip files later.
Hi, I have checked the problem. You can refer to this: Support Transformers 4.31.0. It is caused by the version of the transformers package. You can try downgrading it to 4.28.1, which is the default version we used.
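If helpful, the downgrade is a one-line pip install (assuming a pip-managed environment; this is just the standard pinning syntax, not a command from the repository):

```shell
pip install transformers==4.28.1
```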
Thank you very much, that solved the problem.
Just one more question: if I want to add more prompts (questions), can I create a list of questions?
For instance:
question = ["Can you identify any visible signs of Cardiomegaly in the image?", "What is the modality of the image?"]
Thanks again.
Hm, you may also duplicate the image dict variable to length 2, providing an image input for the second question. Our current version does not support multiple rounds of dialogue, so if you want to ask another question, the easiest way is simply to replace the previous question and run again.
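A minimal sketch of that suggestion (the names here are assumptions, not the repository's actual API: `answer_fn` stands in for whatever single-question inference call the demo script exposes). Since the current version is single-turn, each question is answered independently, reusing a copy of the same image dict:

```python
def ask_all(questions, image_dict, answer_fn):
    """Run a single-turn pipeline once per question; no dialogue history."""
    results = []
    for q in questions:
        # duplicate the image entry so every question carries its own image input
        results.append((q, answer_fn(q, dict(image_dict))))
    return results
```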
Thank you for making available your model. Awesome work!
I was trying to load the model and got the error message below. I downloaded all the files and then concatenated them with:

```shell
cat RadFM.z* > model.zip
```

Then I unzipped with `unzip model.zip` and got the file `pytorch_model.bin`. Am I doing this procedure right? Thank you.

```
RuntimeError: Error(s) in loading state_dict for MultiLLaMAForCausalLM: Unexpected key(s) in state_dict: "lang_model.model.layers.0.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.1.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.2.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.3.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.4.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.5.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.6.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.7.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.8.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.9.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.10.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.11.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.12.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.13.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.14.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.15.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.16.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.17.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.18.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.19.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.20.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.21.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.22.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.23.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.24.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.25.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.26.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.27.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.28.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.29.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.30.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.31.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.32.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.33.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.34.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.35.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.36.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.37.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.38.self_attn.rotary_emb.inv_freq", "lang_model.model.layers.39.self_attn.rotary_emb.inv_freq", "embedding_layer.bert_model.embeddings.position_ids".
```