ggerganov / llama.cpp

LLM inference in C/C++
MIT License

convert-hf-to-gguf.py Qwen1.5-4B-Chat-GPTQ-Int4 error #7505

Closed 0wwafa closed 2 weeks ago

0wwafa commented 3 months ago

git clone --depth 1 --single-branch https://huggingface.co/Qwen/Qwen1.5-4B-Chat-GPTQ-Int4

INFO:hf-to-gguf:Loading model: Qwen1.5-4B-Chat-GPTQ-Int4
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Set model parameters
INFO:hf-to-gguf:gguf: context length = 32768
INFO:hf-to-gguf:gguf: embedding length = 2560
INFO:hf-to-gguf:gguf: feed forward length = 6912
INFO:hf-to-gguf:gguf: head count = 20
INFO:hf-to-gguf:gguf: key-value head count = 20
INFO:hf-to-gguf:gguf: rope theta = 5000000.0
INFO:hf-to-gguf:gguf: rms norm epsilon = 1e-06
INFO:hf-to-gguf:gguf: file type = 1
INFO:hf-to-gguf:Set model tokenizer
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
INFO:gguf.vocab:Adding 151387 merge(s).
INFO:gguf.vocab:Setting special token type eos to 151645
INFO:gguf.vocab:Setting special token type pad to 151643
INFO:gguf.vocab:Setting special token type bos to 151643
INFO:gguf.vocab:Setting chat_template to {% for message in messages %}{% if loop.first and messages[0]['role'] != 'system' %}{{ '<|im_start|>system
You are a helpful assistant.<|im_end|>
' }}{% endif %}{{'<|im_start|>' + message['role'] + '
' + message['content'] + '<|im_end|>' + '
'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant
' }}{% endif %}
INFO:hf-to-gguf:Exporting model to 'Qwen1.5-4B-Chat-GPTQ-Int4\ggml-model-f16.gguf'
INFO:hf-to-gguf:gguf: loading model weight map from 'model.safetensors.index.json'
INFO:hf-to-gguf:gguf: loading model part 'model-00001-of-00002.safetensors'
INFO:hf-to-gguf:token_embd.weight,         torch.float16 --> F16, shape = {2560, 151936}
INFO:hf-to-gguf:blk.0.attn_norm.weight,    torch.float16 --> F32, shape = {2560}
INFO:hf-to-gguf:blk.0.ffn_down.bias,       torch.float16 --> F32, shape = {2560}
Traceback (most recent call last):
  File "I:\bin4\convert-hf-to-gguf.py", line 2623, in <module>
    main()
  File "I:\bin4\convert-hf-to-gguf.py", line 2617, in main
    model_instance.write()
  File "I:\bin4\convert-hf-to-gguf.py", line 328, in write
    self.write_tensors()
  File "I:\bin4\convert-hf-to-gguf.py", line 264, in write_tensors
    for new_name, data in ((n, d.squeeze().numpy()) for n, d in self.modify_tensors(data_torch, name, bid)):
                                                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "I:\bin4\convert-hf-to-gguf.py", line 231, in modify_tensors
    return [(self.map_tensor_name(name), data_torch)]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "I:\bin4\convert-hf-to-gguf.py", line 182, in map_tensor_name
    raise ValueError(f"Can not map tensor {name!r}")
ValueError: Can not map tensor 'model.layers.0.mlp.down_proj.g_idx'
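
For context: convert-hf-to-gguf.py maps each Hugging Face tensor name onto a GGUF name, and GPTQ checkpoints carry extra per-layer quantization tensors (qweight, qzeros, scales, g_idx) that have no entry in that map, which is why map_tensor_name raises here. A minimal pre-flight check along these lines (a hypothetical helper, not part of llama.cpp) can flag such checkpoints before attempting a conversion:

```python
# Hypothetical pre-flight check (not part of convert-hf-to-gguf.py):
# scan the safetensors index for GPTQ-style tensor names before converting.
import json
from pathlib import Path

GPTQ_SUFFIXES = (".qweight", ".qzeros", ".scales", ".g_idx")

def looks_gptq_quantized(model_dir: str) -> bool:
    """Return True if the weight map contains GPTQ quantization tensors."""
    index = json.loads(
        (Path(model_dir) / "model.safetensors.index.json").read_text()
    )
    return any(name.endswith(GPTQ_SUFFIXES) for name in index["weight_map"])

if looks_gptq_quantized("Qwen1.5-4B-Chat-GPTQ-Int4"):
    print("GPTQ checkpoint detected: convert the original FP16 model instead,"
          " then quantize the resulting GGUF with llama.cpp.")
```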
0wwafa commented 3 months ago

And the same happens with the int8 version:

convert-hf-to-gguf.py Qwen1.5-14B-Chat-GPTQ-Int8
INFO:hf-to-gguf:Loading model: Qwen1.5-14B-Chat-GPTQ-Int8
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Set model parameters
INFO:hf-to-gguf:gguf: context length = 32768
INFO:hf-to-gguf:gguf: embedding length = 5120
INFO:hf-to-gguf:gguf: feed forward length = 14336
INFO:hf-to-gguf:gguf: head count = 40
INFO:hf-to-gguf:gguf: key-value head count = 40
INFO:hf-to-gguf:gguf: rope theta = 1000000.0
INFO:hf-to-gguf:gguf: rms norm epsilon = 1e-06
INFO:hf-to-gguf:gguf: file type = 1
INFO:hf-to-gguf:Set model tokenizer
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
INFO:gguf.vocab:Adding 151387 merge(s).
INFO:gguf.vocab:Setting special token type eos to 151645
INFO:gguf.vocab:Setting special token type pad to 151643
INFO:gguf.vocab:Setting special token type bos to 151643
INFO:gguf.vocab:Setting chat_template to {% for message in messages %}{% if loop.first and messages[0]['role'] != 'system' %}{{ '<|im_start|>system
You are a helpful assistant.<|im_end|>
' }}{% endif %}{{'<|im_start|>' + message['role'] + '
' + message['content'] + '<|im_end|>' + '
'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant
' }}{% endif %}
INFO:hf-to-gguf:Exporting model to 'Qwen1.5-14B-Chat-GPTQ-Int8\ggml-model-f16.gguf'
INFO:hf-to-gguf:gguf: loading model weight map from 'model.safetensors.index.json'
INFO:hf-to-gguf:gguf: loading model part 'model-00001-of-00005.safetensors'
INFO:hf-to-gguf:token_embd.weight,         torch.float16 --> F16, shape = {5120, 152064}
INFO:hf-to-gguf:blk.0.attn_norm.weight,    torch.float16 --> F32, shape = {5120}
INFO:hf-to-gguf:blk.0.ffn_down.bias,       torch.float16 --> F32, shape = {5120}
Traceback (most recent call last):
  File "I:\bin4\convert-hf-to-gguf.py", line 2623, in <module>
    main()
  File "I:\bin4\convert-hf-to-gguf.py", line 2617, in main
    model_instance.write()
  File "I:\bin4\convert-hf-to-gguf.py", line 328, in write
    self.write_tensors()
  File "I:\bin4\convert-hf-to-gguf.py", line 264, in write_tensors
    for new_name, data in ((n, d.squeeze().numpy()) for n, d in self.modify_tensors(data_torch, name, bid)):
                                                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "I:\bin4\convert-hf-to-gguf.py", line 231, in modify_tensors
    return [(self.map_tensor_name(name), data_torch)]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "I:\bin4\convert-hf-to-gguf.py", line 182, in map_tensor_name
    raise ValueError(f"Can not map tensor {name!r}")
ValueError: Can not map tensor 'model.layers.0.mlp.down_proj.g_idx'
grapevine-AI commented 3 months ago

Hello, I am using llama.cpp (Windows) to make a 16-bit GGUF, and a similar error occurs.

INFO:hf-to-gguf:Loading model: merge
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Set model parameters
INFO:hf-to-gguf:gguf: context length = 131072
INFO:hf-to-gguf:gguf: embedding length = 8192
INFO:hf-to-gguf:gguf: feed forward length = 22528
INFO:hf-to-gguf:gguf: head count = 64
INFO:hf-to-gguf:gguf: key-value head count = 64
INFO:hf-to-gguf:gguf: rope theta = 8000000.0
INFO:hf-to-gguf:gguf: rms norm epsilon = 1e-05
INFO:hf-to-gguf:gguf: layer norm epsilon = 1e-05
INFO:hf-to-gguf:gguf: file type = 1
INFO:hf-to-gguf:Set model tokenizer
INFO:numexpr.utils:Note: NumExpr detected 16 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
INFO:numexpr.utils:NumExpr defaulting to 8 threads.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
INFO:gguf.vocab:Adding 253333 merge(s).
INFO:gguf.vocab:Setting special token type bos to 5
INFO:gguf.vocab:Setting special token type eos to 255001
INFO:gguf.vocab:Setting special token type pad to 0
INFO:gguf.vocab:Setting add_bos_token to True
INFO:gguf.vocab:Setting add_eos_token to False
INFO:gguf.vocab:Setting chat_template to [{'name': 'default', 'template': "{{ bos_token }}{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% elif false == true %}{% set loop_messages = messages %}{% set system_message = 'You are Command-R, a brilliant, sophisticated, AI-assistant trained to assist human users by providing thorough responses. You are trained by Cohere.' %}{% else %}{% set loop_messages = messages %}{% set system_message = false %}{% endif %}{% if system_message != false %}{{ '<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>' + system_message + '<|END_OF_TURN_TOKEN|>' }}{% endif %}{% for message in loop_messages %}{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}{% endif %}{% set content = message['content'] %}{% if message['role'] == 'user' %}{{ '<|START_OF_TURN_TOKEN|><|USER_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }}{% elif message['role'] == 'assistant' %}{{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>'  + content.strip() + '<|END_OF_TURN_TOKEN|>' }}{% endif %}{% endfor %}{% if add_generation_prompt %}{{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>' }}{% endif %}"}, {'name': 'tool_use', 'template': '{{ bos_token }}{% if messages[0][\'role\'] == \'system\' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0][\'content\'] %}{% else %}{% set loop_messages = messages %}{% set system_message = \'## Task and Context\\nYou help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user\\\'s needs as best you can, which will be wide-ranging.\\n\\n## Style Guide\\nUnless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.\' %}{% endif %}{{ \'<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>\' }}{{ \'# Safety Preamble\' }}{{ \'\nThe instructions in this section override those in the task description and style guide sections. Don\\\'t answer questions that are harmful or immoral.\' }}{{ \'\n\n# System Preamble\' }}{{ \'\n## Basic Rules\' }}{{ \'\nYou are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. 
When you answer the user\\\'s requests, you cite your sources in your answers, according to those instructions.\' }}{{ \'\n\n# User Preamble\' }}{{ \'\n\' + system_message }}{{\'\n\n## Available Tools\nHere is a list of tools that you have available to you:\n\n\'}}{% for tool in tools %}{% if loop.index0 != 0 %}{{ \'\n\n\'}}{% endif %}{{\'```python\ndef \' + tool.name + \'(\'}}{% for param_name, param_fields in tool.parameter_definitions.items() %}{% if loop.index0 != 0 %}{{ \', \'}}{% endif %}{{param_name}}: {% if not param_fields.required %}{{\'Optional[\' + param_fields.type + \'] = None\'}}{% else %}{{ param_fields.type }}{% endif %}{% endfor %}{{ \') -> List[Dict]:\n    """\'}}{{ tool.description }}{% if tool.parameter_definitions|length != 0 %}{{ \'\n\n    Args:\n        \'}}{% for param_name, param_fields in tool.parameter_definitions.items() %}{% if loop.index0 != 0 %}{{ \'\n        \' }}{% endif %}{{ param_name + \' (\'}}{% if not param_fields.required %}{{\'Optional[\' + param_fields.type + \']\'}}{% else %}{{ param_fields.type }}{% endif %}{{ \'): \' + param_fields.description }}{% endfor %}{% endif %}{{ \'\n    """\n    pass\n```\' }}{% endfor %}{{ \'<|END_OF_TURN_TOKEN|>\'}}{% for message in loop_messages %}{% set content = message[\'content\'] %}{% if message[\'role\'] == \'user\' %}{{ \'<|START_OF_TURN_TOKEN|><|USER_TOKEN|>\' + content.strip() + \'<|END_OF_TURN_TOKEN|>\' }}{% elif message[\'role\'] == \'system\' %}{{ \'<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>\' + content.strip() + \'<|END_OF_TURN_TOKEN|>\' }}{% elif message[\'role\'] == \'assistant\' %}{{ \'<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>\'  + content.strip() + \'<|END_OF_TURN_TOKEN|>\' }}{% endif %}{% endfor %}{{\'<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>Write \\\'Action:\\\' followed by a json-formatted list of actions that you want to perform in order to produce a good response to the user\\\'s last input. You can use any of the supplied tools any number of times, but you should aim to execute the minimum number of necessary actions for the input. You should use the `directly-answer` tool if calling the other tools is unnecessary. The list of actions you want to call should be formatted as a list of json objects, for example:\n```json\n[\n    {\n        "tool_name": title of the tool in the specification,\n        "parameters": a dict of parameters to input into the tool as they are defined in the specs, or {} if it takes no parameters\n    }\n]```<|END_OF_TURN_TOKEN|>\'}}{% if add_generation_prompt %}{{ \'<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>\' }}{% endif %}'}, {'name': 'rag', 'template': "{{ bos_token }}{% if messages[0]['role'] == 'system' %}{% set loop_messages = messages[1:] %}{% set system_message = messages[0]['content'] %}{% else %}{% set loop_messages = messages %}{% set system_message = '## Task and Context\\nYou help people answer their questions and other requests interactively. You will be asked a very wide array of requests on all kinds of topics. You will be equipped with a wide range of search engines or similar tools to help you, which you use to research your answer. You should focus on serving the user\\'s needs as best you can, which will be wide-ranging.\\n\\n## Style Guide\\nUnless the user asks for a different style of answer, you should answer in full sentences, using proper grammar and spelling.' %}{% endif %}{{ '<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>' }}{{ '# Safety Preamble' }}{{ '\nThe instructions in this section override those in the task description and style guide sections. 
Don\\'t answer questions that are harmful or immoral.' }}{{ '\n\n# System Preamble' }}{{ '\n## Basic Rules' }}{{ '\nYou are a powerful conversational AI trained by Cohere to help people. You are augmented by a number of tools, and your job is to use and consume the output of these tools to best help the user. You will see a conversation history between yourself and a user, ending with an utterance from the user. You will then see a specific instruction instructing you what kind of response to generate. When you answer the user\\'s requests, you cite your sources in your answers, according to those instructions.' }}{{ '\n\n# User Preamble' }}{{ '\n' + system_message }}{{ '<|END_OF_TURN_TOKEN|>'}}{% for message in loop_messages %}{% set content = message['content'] %}{% if message['role'] == 'user' %}{{ '<|START_OF_TURN_TOKEN|><|USER_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }}{% elif message['role'] == 'system' %}{{ '<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>' + content.strip() + '<|END_OF_TURN_TOKEN|>' }}{% elif message['role'] == 'assistant' %}{{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>'  + content.strip() + '<|END_OF_TURN_TOKEN|>' }}{% endif %}{% endfor %}{{ '<|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>'}}{{ '<results>' }}{% for document in documents %}{{ '\nDocument: ' }}{{ loop.index0 }}\n{% for key, value in document.items() %}{{ key }}: {{value}}\n{% endfor %}{% endfor %}{{ '</results>'}}{{ '<|END_OF_TURN_TOKEN|><|START_OF_TURN_TOKEN|><|SYSTEM_TOKEN|>' }}{{ 'Carefully perform the following instructions, in order, starting each with a new line.\n' }}{{ 'Firstly, Decide which of the retrieved documents are relevant to the user\\'s last input by writing \\'Relevant Documents:\\' followed by comma-separated list of document numbers. If none are relevant, you should instead write \\'None\\'.\n' }}{{ 'Secondly, Decide which of the retrieved documents contain facts that should be cited in a good answer to the user\\'s last input by writing \\'Cited Documents:\\' followed a comma-separated list of document numbers. If you dont want to cite any of them, you should instead write \\'None\\'.\n' }}{% if citation_mode=='accurate' %}{{ 'Thirdly, Write \\'Answer:\\' followed by a response to the user\\'s last input in high quality natural english. Use the retrieved documents to help you. Do not insert any citations or grounding markup.\n' }}{% endif %}{{ 'Finally, Write \\'Grounded answer:\\' followed by a response to the user\\'s last input in high quality natural english. Use the symbols <co: doc> and </co: doc> to indicate when a fact comes from a document in the search result, e.g <co: 0>my fact</co: 0> for a fact from document 0.' }}{{ '<|END_OF_TURN_TOKEN|>' }}{% if add_generation_prompt %}{{ '<|START_OF_TURN_TOKEN|><|CHATBOT_TOKEN|>' }}{% endif %}"}]
INFO:hf-to-gguf:Exporting model to 'F:\Users\Public\Downloads\dare_ties\merge\ggml-model-f16.gguf'
INFO:hf-to-gguf:gguf: loading model weight map from 'model.safetensors.index.json'
INFO:hf-to-gguf:gguf: loading model part 'model-00001-of-00082.safetensors'
Traceback (most recent call last):
  File "C:\Users\kirby\llama.cpp\convert-hf-to-gguf.py", line 2623, in <module>
    main()
  File "C:\Users\kirby\llama.cpp\convert-hf-to-gguf.py", line 2617, in main
    model_instance.write()
  File "C:\Users\kirby\llama.cpp\convert-hf-to-gguf.py", line 328, in write
    self.write_tensors()
  File "C:\Users\kirby\llama.cpp\convert-hf-to-gguf.py", line 264, in write_tensors
    for new_name, data in ((n, d.squeeze().numpy()) for n, d in self.modify_tensors(data_torch, name, bid)):
  File "C:\Users\kirby\llama.cpp\convert-hf-to-gguf.py", line 231, in modify_tensors
    return [(self.map_tensor_name(name), data_torch)]
  File "C:\Users\kirby\llama.cpp\convert-hf-to-gguf.py", line 182, in map_tensor_name
    raise ValueError(f"Can not map tensor {name!r}")
ValueError: Can not map tensor 'lm_head.weight'

(This is a merge model based on Command R, but I could get it to work with transformers, so I don't think the model is broken.)
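
For reference, stock Command R ties lm_head to the token embeddings, so the converter has no mapping for a standalone lm_head.weight; a merge that serializes the head as its own tensor trips exactly this check. A hedged workaround sketch, assuming the merged head really is identical to the embeddings (all paths are placeholders): re-save the model with the tie restored, so the redundant tensor is dropped before converting.

```python
# Workaround sketch, assuming lm_head is genuinely tied to the embeddings
# in the merged model (true for stock Command R). Paths are placeholders.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("path/to/merge", torch_dtype="auto")
model.config.tie_word_embeddings = True  # assert the tie explicitly
model.tie_weights()                      # share lm_head with embed_tokens
# safetensors deduplicates shared tensors, so lm_head.weight is not written out
model.save_pretrained("path/to/merge-retied", safe_serialization=True)
```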

Diamochang commented 2 months ago

I reproduced this error when converting Qwen2-7B-Instruct-GPTQ-Int4 via the llama.cpp built into the latest version of Ollama for Windows.

Command: python llm/llama.cpp/convert-hf-to-gguf.py ./model --outtype f16 --outfile converted.bin

Output:

INFO:hf-to-gguf:Loading model: model
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Set model parameters
INFO:hf-to-gguf:gguf: context length = 32768
INFO:hf-to-gguf:gguf: embedding length = 3584
INFO:hf-to-gguf:gguf: feed forward length = 18944
INFO:hf-to-gguf:gguf: head count = 28
INFO:hf-to-gguf:gguf: key-value head count = 4
INFO:hf-to-gguf:gguf: rope theta = 1000000.0
INFO:hf-to-gguf:gguf: rms norm epsilon = 1e-06
INFO:hf-to-gguf:gguf: file type = 1
INFO:hf-to-gguf:Set model tokenizer
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
INFO:gguf.vocab:Adding 151387 merge(s).
INFO:gguf.vocab:Setting special token type eos to 151645
INFO:gguf.vocab:Setting special token type pad to 151643
INFO:gguf.vocab:Setting special token type bos to 151643
INFO:gguf.vocab:Setting chat_template to {% for message in messages %}{% if loop.first and messages[0]['role'] != 'system' %}{{ '<|im_start|>system
You are a helpful assistant.<|im_end|>
' }}{% endif %}{{'<|im_start|>' + message['role'] + '
' + message['content'] + '<|im_end|>' + '
'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant
' }}{% endif %}
INFO:hf-to-gguf:Exporting model to 'converted.bin'
INFO:hf-to-gguf:gguf: loading model weight map from 'model.safetensors.index.json'
INFO:hf-to-gguf:gguf: loading model part 'model-00001-of-00002.safetensors'
INFO:hf-to-gguf:token_embd.weight,         torch.float16 --> F16, shape = {3584, 152064}
INFO:hf-to-gguf:blk.0.attn_norm.weight,    torch.float16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.0.ffn_down.bias,       torch.float16 --> F32, shape = {3584}
Traceback (most recent call last):
  File "C:\Users\[Edited]\ollama\llm\llama.cpp\convert-hf-to-gguf.py", line 2881, in <module>
    main()
  File "C:\Users\[Edited]\ollama\llm\llama.cpp\convert-hf-to-gguf.py", line 2875, in main
    model_instance.write()
  File "C:\Users\[Edited]\ollama\llm\llama.cpp\convert-hf-to-gguf.py", line 328, in write
    self.write_tensors()
  File "C:\Users\[Edited]\ollama\llm\llama.cpp\convert-hf-to-gguf.py", line 265, in write_tensors
    for new_name, data in ((n, d.squeeze().numpy()) for n, d in self.modify_tensors(data_torch, name, bid)):
                                                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\[Edited]\ollama\llm\llama.cpp\convert-hf-to-gguf.py", line 232, in modify_tensors
    return [(self.map_tensor_name(name), data_torch)]
             ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\[Edited]\ollama\llm\llama.cpp\convert-hf-to-gguf.py", line 183, in map_tensor_name
    raise ValueError(f"Can not map tensor {name!r}")
ValueError: Can not map tensor 'model.layers.0.mlp.down_proj.g_idx'
Diamochang commented 2 months ago

Judging by the comments above, this error has been reproduced several times with Qwen-series models. Perhaps using the Qwen team's own GGUF releases for these models would sidestep the problem.

In my case, Qwen2-7B-Instruct has an official GGUF release, Qwen2-7B-Instruct-GGUF, available on ModelScope (Mainland China) and Hugging Face.
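
If you go that route, the official file can be fetched with huggingface_hub; the filename below is an assumption, so check the repo's file list first:

```python
# Sketch of fetching Qwen's official GGUF via huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Qwen/Qwen2-7B-Instruct-GGUF",
    filename="qwen2-7b-instruct-q4_k_m.gguf",  # assumed name; verify on the Hub
)
print(path)
```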

github-actions[bot] commented 2 weeks ago

This issue was closed because it has been inactive for 14 days since being marked as stale.

dipeshpaulsystango commented 5 days ago

Facing the same issue:

INFO:hf-to-gguf:Loading model: gemma-2b-dpo-uncensored-4bit-mitkox
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Exporting model...
INFO:hf-to-gguf:gguf: loading model weight map from 'model.safetensors.index.json'
INFO:hf-to-gguf:gguf: loading model part 'model-00001-of-00002.safetensors'
INFO:hf-to-gguf:token_embd.weight,         torch.float16 --> Q8_0, shape = {3072, 256000}
INFO:hf-to-gguf:blk.0.attn_norm.weight,    torch.float16 --> F32, shape = {3072}
Traceback (most recent call last):
  File "/content/drive/MyDrive/trainer_v2_models/ollama/llm/llama.cpp/convert_hf_to_gguf.py", line 3823, in <module>
    main()
  File "/content/drive/MyDrive/trainer_v2_models/ollama/llm/llama.cpp/convert_hf_to_gguf.py", line 3817, in main
    model_instance.write()
  File "/content/drive/MyDrive/trainer_v2_models/ollama/llm/llama.cpp/convert_hf_to_gguf.py", line 400, in write
    self.prepare_tensors()
  File "/content/drive/MyDrive/trainer_v2_models/ollama/llm/llama.cpp/convert_hf_to_gguf.py", line 285, in prepare_tensors
    for new_name, data in ((n, d.squeeze().numpy()) for n, d in self.modify_tensors(data_torch, name, bid)):
  File "/content/drive/MyDrive/trainer_v2_models/ollama/llm/llama.cpp/convert_hf_to_gguf.py", line 2663, in modify_tensors
    return [(self.map_tensor_name(name), data_torch)]
  File "/content/drive/MyDrive/trainer_v2_models/ollama/llm/llama.cpp/convert_hf_to_gguf.py", line 200, in map_tensor_name
    raise ValueError(f"Can not map tensor {name!r}")
ValueError: Can not map tensor 'model.layers.0.mlp.down_proj.biases'
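
This looks like the same root cause as the reports above: the checkpoint name suggests a 4-bit quantized export, and its extra per-layer tensors (here .biases, presumably alongside matching .scales tensors) have no GGUF mapping. The usual route is to convert the unquantized original to GGUF and quantize with llama.cpp afterwards; a sketch with placeholder paths, using the --outtype/--outfile flags shown earlier in this thread:

```python
# Sketch of the standard workaround: convert the FP16 original, then
# quantize inside llama.cpp. All paths and names here are placeholders.
import subprocess

subprocess.run(
    ["python", "convert_hf_to_gguf.py", "path/to/fp16-model",
     "--outtype", "f16", "--outfile", "model-f16.gguf"],
    check=True,
)
subprocess.run(
    ["./llama-quantize", "model-f16.gguf", "model-q4_k_m.gguf", "Q4_K_M"],
    check=True,
)
```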