Closed · huizhanx closed this issue 1 month ago
Just pull the already-converted Vicuna from Hugging Face.
Just pull the already-converted Vicuna from Hugging Face.

Hello, are the LLaMA Weights Llama-2 (https://huggingface.co/meta-llama/Llama-2-7b-hf/tree/main)? After merging with the Delta Weights of Vicuna, it reports: Some weights of LlamaForCausalLM were not initialized from the model checkpoint at
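For reference, a delta-weight merge is just an element-wise add over matching tensors, and it only works when the base checkpoint is the exact model the delta was diffed against (original LLaMA-1, not Llama-2). Below is a minimal sketch of that logic, using numpy in place of torch state dicts; `apply_delta` is a hypothetical helper, not this project's code — the real merge is usually done with FastChat's `fastchat.model.apply_delta` module (flags vary by version):

```python
import numpy as np

def apply_delta(base_sd: dict, delta_sd: dict) -> dict:
    """Merge delta weights into base weights, key by key.

    Hypothetical sketch: real checkpoints hold torch tensors, but the
    arithmetic is the same. A shape mismatch here is the symptom you
    see when the base is not the checkpoint the delta was made from.
    """
    merged = {}
    for name, delta in delta_sd.items():
        if name not in base_sd:
            raise KeyError(f"delta weight {name!r} missing from base")
        base = base_sd[name]
        if base.shape != delta.shape:
            raise RuntimeError(
                f"size mismatch for {name!r}: base {base.shape} vs delta {delta.shape}"
            )
        merged[name] = base + delta  # element-wise add, nothing more
    return merged
```

If all shapes match but you still see "Some weights ... were not initialized", the parameter names in the merged checkpoint may not line up with what `LlamaForCausalLM` expects, which again points at the wrong base model.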
llama-1
llama-1
Do you have a link to the llama-1 weights? I can't find llama-1 on Hugging Face.
You have to request those from Meta.
---Original message--- From: @.> Sent: Monday, May 20, 2024, 15:09 To: @.>; Cc: "Shengmin @.**@.>; Subject: Re: [CASIA-IVA-Lab/AnomalyGPT] When reproducing web_demo, where do I download llama's two .bin weights? (Issue #92)
Following the README: first download consolidated.00.pth, then convert it to .bin weights with convert_llama_weights_to_hf.py? But that reports: RuntimeError: shape '[32, 2, 2, 4096]' is invalid for input of size 16777216
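One possible reading of that error: 16777216 = 4096 × 4096, i.e. the attention weight inside consolidated.00.pth has the full 7B size, while the target shape [32, 2, 2, 4096] implies the script computed the wrong per-head dimension — so the params.json (or the --model_size flag) it read may not match the checkpoint, or the download is incomplete. The conversion script's rotary reordering is roughly the reshape below, sketched here in numpy with toy dimensions (the real script uses torch):

```python
import numpy as np

def permute(w: np.ndarray, n_heads: int, dim: int) -> np.ndarray:
    # The q/k reordering done during LLaMA -> HF conversion:
    # split each head into (2, head_dim // 2), swap those axes,
    # then flatten back to a (dim, dim) matrix.
    return (
        w.reshape(n_heads, 2, dim // n_heads // 2, dim)
        .transpose(0, 2, 1, 3)
        .reshape(dim, dim)
    )

# Toy 7B-like config scaled down: dim=64, n_heads=8.
w = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
ok = permute(w, n_heads=8, dim=64)  # 8 * 2 * 4 * 64 == 64 * 64, so this works

# With a config that doesn't match the tensor, the reshape fails in the
# same way as the reported RuntimeError ("shape ... is invalid for input
# of size ..."), which is why params.json is worth re-checking.
try:
    permute(w, n_heads=8, dim=32)
except ValueError as e:
    print("reshape failed:", e)
```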
meta-llama/CodeLlama-7b-hf at main (huggingface.co) has three .bin files, but the README says there are only two; and when I download those three and merge them with the Delta Weights of Vicuna, it reports a tensor size mismatch: RuntimeError: The size of tensor a (32005) must match the size of tensor b (32001) at non-singleton dimension 0
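That 32005-vs-32001 mismatch looks like the vocab dimension of the embedding table: CodeLlama is not the LLaMA-1 base the Vicuna delta was computed against, so their vocab sizes differ and the element-wise merge fails on the first vocab-sized tensor it touches. A cheap sanity check before merging is to compare `vocab_size` in each checkpoint's config.json without loading any weights — `read_vocab_size` and `assert_mergeable` below are hypothetical helpers, not part of this repo:

```python
import json
from pathlib import Path

def read_vocab_size(model_dir: str) -> int:
    """Read vocab_size from an HF checkpoint directory's config.json."""
    with open(Path(model_dir) / "config.json") as f:
        return json.load(f)["vocab_size"]

def assert_mergeable(base_dir: str, delta_dir: str) -> None:
    """Fail fast if the base and delta checkpoints can't be merged."""
    base_v = read_vocab_size(base_dir)
    delta_v = read_vocab_size(delta_dir)
    if base_v != delta_v:
        raise RuntimeError(
            f"vocab mismatch: base {base_v} vs delta {delta_v}; "
            "the delta must be applied to the exact base it was diffed from"
        )
```

This only catches the vocab-size symptom; the underlying fix is still to start from the LLaMA-1 weights the delta expects rather than CodeLlama or Llama-2.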
I don't know how the llama weights are supposed to be set up.