unslothai / unsloth

Finetune Llama 3.2, Mistral, Phi, Qwen 2.5 & Gemma LLMs 2-5x faster with 80% less memory
https://unsloth.ai
Apache License 2.0

[TEMP FIX] Ollama / llama.cpp: cannot find tokenizer merges in model file [duplicate] #1062

Open avvRobertoAlma opened 1 month ago

avvRobertoAlma commented 1 month ago

Hi, I tried fine-tuning both llama 3.1-8b-instruct and llama 3-8b-instruct following the notebook you provided here.

The training phase completed without errors and I generated the GGUF quantized at 8-bit.

However, I cannot load the GGUF in LM Studio because of this error:

"llama.cpp error: 'error loading model vocabulary: cannot find tokenizer merges in model file\n'"

Did you have this kind of problem?

I fine-tuned both mistral-instruct and mistral-small-instruct successfully without problems. I'm experiencing issues only with Llama.

DiLiuNEUexpresscompany commented 1 month ago

The interface on Colab looks correct, and I can successfully import the GGUF file in Jan, but I'm unable to use the model for generation.

danielhanchen commented 1 month ago

Oh yep, the issue is with transformers 4.45 - I'm communicating with them to fix the problem

laoc81 commented 1 month ago

My version in the notebook -> transformers 4.44.2 (the same as last week, when it was working), but I had the same problem.

jwhitehorn commented 1 month ago

If you're installing Unsloth straight from git, like !pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git", it will just install the latest version of transformers, as it specifically requires 4.45.

Here is how I worked around this bug:

First I manually installed Transformers:

!pip install --upgrade --force-reinstall "transformers==4.44.2" "numpy==2.0.2" # https://github.com/unslothai/unsloth/issues/1062

And then I installed Unsloth using the commit hash for the September 2024 release:

!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git@fb77505f8429566f5d21d6ea5318c342e8a67991" # Version: September-2024

The September 2024 release of Unsloth only requires Transformers 4.44, so it does not attempt to upgrade the Transformers installation from the first step.

avvRobertoAlma commented 1 month ago

> My version in the notebook -> transformers 4.44.2 (the same as last week, when it was working), but I had the same problem.

> If you're installing Unsloth straight from git, like !pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git", it will just install the latest version of transformers, as it specifically requires 4.45.
>
> Here is how I worked around this bug:
>
> First I manually installed Transformers:
>
> !pip install --upgrade --force-reinstall "transformers==4.44.2" "numpy==2.0.2" # https://github.com/unslothai/unsloth/issues/1062
>
> And then I installed Unsloth using the commit hash for the September 2024 release:
>
> !pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git@fb77505f8429566f5d21d6ea5318c342e8a67991" # Version: September-2024
>
> The September 2024 release of Unsloth only requires Transformers 4.44, so it does not attempt to upgrade the Transformers installation from the first step.

But in this case, do you need to perform a new training, or do you only have to regenerate the GGUF?

xmaayy commented 1 month ago

> But in this case, do you need to perform a new training, or do you only have to regenerate the GGUF?

It looks like it's a problem with the tokenizer that's being exported by Unsloth :(

---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
<ipython-input-4-e8d6395b404e> in <cell line: 8>()
      6 load_in_4bit = False # Use 4bit quantization to reduce memory usage. Can be False.
      7 
----> 8 model, _tokenizer = FastLanguageModel.from_pretrained(
      9     model_name = "simmo/llama3.2-pyfim-3b",
     10     max_seq_length = max_seq_length,

7 frames
/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_fast.py in __init__(self, *args, **kwargs)
    113         elif fast_tokenizer_file is not None and not from_slow:
    114             # We have a serialization from tokenizers which let us directly build the backend
--> 115             fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
    116         elif slow_tokenizer is not None:
    117             # We need to convert a slow tokenizer to build the backend

Exception: data did not match any variant of untagged enum ModelWrapper at line 1251003 column 3

Mukunda-Gogoi commented 1 month ago

Is there any workaround for this? I am blocked. It was working fine just 3 days ago.

nullnuller commented 1 month ago

pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git@fb77505f8429566f5d21d6ea5318c342e8a67991"

I also tried your suggestion but it still fails

./llama.cpp/llama-cli -m model/unsloth.Q8_0.gguf -p "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request" -cnv
build: 3830 (b5de3b74) with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
main: llama backend init
main: load the model and apply lora adapter, if any
llama_model_loader: loaded meta data with 28 key-value pairs and 292 tensors from model/unsloth.Q8_0.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = Meta Llama 3.1 8b Bnb 4bit
llama_model_loader: - kv   3:                       general.organization str              = Unsloth
llama_model_loader: - kv   4:                           general.finetune str              = bnb-4bit
llama_model_loader: - kv   5:                           general.basename str              = meta-llama-3.1
llama_model_loader: - kv   6:                         general.size_label str              = 8B
llama_model_loader: - kv   7:                          llama.block_count u32              = 32
llama_model_loader: - kv   8:                       llama.context_length u32              = 131072
llama_model_loader: - kv   9:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv  10:                  llama.feed_forward_length u32              = 14336
llama_model_loader: - kv  11:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv  12:              llama.attention.head_count_kv u32              = 8
llama_model_loader: - kv  13:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  14:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  15:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  16:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  17:                          general.file_type u32              = 7
llama_model_loader: - kv  18:                           llama.vocab_size u32              = 128256
llama_model_loader: - kv  19:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  20:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  21:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  22:                      tokenizer.ggml.tokens arr[str,128256]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  23:                  tokenizer.ggml.token_type arr[i32,128256]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  24:                tokenizer.ggml.bos_token_id u32              = 128000
llama_model_loader: - kv  25:                tokenizer.ggml.eos_token_id u32              = 128001
llama_model_loader: - kv  26:            tokenizer.ggml.padding_token_id u32              = 128004
llama_model_loader: - kv  27:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   66 tensors
llama_model_loader: - type q8_0:  226 tensors
llama_model_load: error loading model: error loading model vocabulary: cannot find tokenizer merges in model file

llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: failed to load model 'model/unsloth.Q8_0.gguf'
main: error: unable to load model

Does the merged F16 version work, either raw or GGUF?

jwhitehorn commented 1 month ago

> Does the merged F16 version work, either raw or GGUF?

My workflow involves saving the model using model.save_pretrained_merged("drive/MyDrive/my/path/here/", tokenizer, save_method = "merged_16bit") and then using convert_hf_to_gguf.py on my Mac to generate the GGUF (first 16-bit, then any quantizations I want) from the safetensors created by Unsloth.
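
In notebook-plus-shell form, that flow looks roughly like this (a sketch rather than my exact cells; the paths and the Q8_0 quantization are placeholder choices, and convert_hf_to_gguf.py / llama-quantize come from a local llama.cpp checkout):

# 1. In the notebook: merge the LoRA into 16-bit weights and save safetensors.
model.save_pretrained_merged(
    "drive/MyDrive/my/path/here/",  # output directory for the merged model
    tokenizer,
    save_method = "merged_16bit",
)

# 2. Locally, from the llama.cpp checkout: convert the safetensors to a
#    16-bit GGUF, then quantize it (Q8_0 is just an example).
python convert_hf_to_gguf.py drive/MyDrive/my/path/here/ --outfile model-f16.gguf --outtype f16
./llama-quantize model-f16.gguf model-Q8_0.gguf Q8_0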

This has been working fine for me with my workaround to lock down the version of Unsloth to the September release.

Do be sure you're also manually installing Transformers and specifying version 4.44.2. Otherwise, if you just pip install transformers or let Unsloth install it for you, you'll end up with the latest version, which is the root cause of this error.

EDIT:

If it helps, here is the full text of the first cell of my notebook:

%%capture
# Mount Google Drive for larger data sets / output
from google.colab import drive
drive.mount('/content/drive')

!pip install --upgrade --force-reinstall "transformers==4.44.2" "numpy==2.0.2" # https://github.com/unslothai/unsloth/issues/1062
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git@fb77505f8429566f5d21d6ea5318c342e8a67991" # Version: September-2024
!pip install --no-deps xformers trl peft accelerate bitsandbytes triton

It's the combination of manually installing Transformers and installing Unsloth from a specific hash that works around this for the time being.
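
If you want to verify the pins actually took effect, a quick check in a fresh cell (a sketch; the expected tokenizers bound follows from transformers 4.44.2's own dependency pins):

import transformers, tokenizers
print(transformers.__version__)  # expect 4.44.2
print(tokenizers.__version__)    # expect 0.19.x - transformers 4.44.2 pins
                                 # tokenizers < 0.20, which still writes the
                                 # old merges format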

nullnuller commented 1 month ago

> %%capture
> # Mount Google Drive for larger data sets / output
> from google.colab import drive
> drive.mount('/content/drive')
>
> !pip install --upgrade --force-reinstall "transformers==4.44.2" "numpy==2.0.2" # https://github.com/unslothai/unsloth/issues/1062
> !pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git@fb77505f8429566f5d21d6ea5318c342e8a67991" # Version: September-2024
> !pip install --no-deps xformers trl peft accelerate bitsandbytes triton

Thanks, this worked. I hope they fix it properly.

danielhanchen commented 1 month ago

Extreme apologies on the delay - I was out for a few days - I will get to the bottom of this and fix it asap - apologies again!

Mukunda-Gogoi commented 1 month ago

@danielhanchen thank you. eagerly waiting for your fix to resume my work. <3

danielhanchen commented 1 month ago

So it seems #1065 and this are identical as well - I will update both threads

danielhanchen commented 1 month ago

I just communicated with the Hugging Face team - they will upstream updates to llama.cpp later in the week. It seems like tokenizers>=0.20.0 is the culprit.

I re-uploaded all Llama-3.2 models and as a temporary fix, Unsloth will use transformers==4.44.2.

Please try again and see if it works! This unfortunately means you need to re-finetune the model if you did not save the 16-bit merged weights or LoRAs. Extreme apologies. If you did save them, update Unsloth, then reload them and save them to GGUF.

Update Unsloth via:

pip uninstall unsloth -y
pip install --upgrade --no-cache-dir "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
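
Once updated, reloading saved weights and re-exporting to GGUF looks roughly like this (a sketch - "lora_model" stands in for wherever you saved your LoRA or merged weights, and the parameter values are illustrative):

from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "lora_model",  # directory with your saved LoRA / merged weights
    max_seq_length = 2048,
    load_in_4bit = False,
)
model.save_pretrained_gguf("gguf_model", tokenizer, quantization_method = "q8_0")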

I will update everyone once the Hugging Face team resolves the issue! Sorry again!

Pinging: @jwhitehorn @xmaayy @avvRobertoAlma @nullnuller @DiLiuNEUexpresscompany @laoc81 @Mukunda-Gogoi

drsanta-1337 commented 1 month ago

thanks @danielhanchen! let me try it out!

danielhanchen commented 1 month ago

@drsanta-1337 If you already updated, you might have to do it again, sorry!! I just pushed changes to main

drsanta-1337 commented 1 month ago

no probs!

LysandreJik commented 1 month ago

Thanks @danielhanchen, and sorry for the disturbances; to give some context as to what is happening here: we updated the format of merges serialization in tokenizers to be much more flexible (this was done in this commit).


The change was done to be backwards-compatible: tokenizers and all libraries that depend on it will keep the ability to load merges files which were serialized in the old way.

However, it could not be forwards-compatible: if a file is serialized with the new format, older versions of tokenizers will not be able to load it.

This is why we're seeing this issue: new files are serialized using the new format, and these files are not yet loadable in llama.cpp. We're updating all other codepaths (namely llama.cpp) to adapt to the new version. Once that is shipped, all your trained checkpoints will be directly loadable as usual. We're working with llama.cpp to ship this as fast as possible.
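
For illustration, the difference in the serialized "merges" inside tokenizer.json looks roughly like this (a sketch; the token strings are made up):

Old format (tokenizers < 0.20) - each merge is a single space-joined string:

"merges": ["Ġ t", "h e", "i n"]

New format (tokenizers >= 0.20) - each merge is a pair of strings, which also allows merging tokens that themselves contain a space:

"merges": [["Ġ", "t"], ["h", "e"], ["i", "n"]]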

Thank you!

Issue tracker in llama.cpp: https://github.com/ggerganov/llama.cpp/issues/9692

drsanta-1337 commented 1 month ago

@danielhanchen The fix is working now, thanks. I fine-tuned, generated a Q5_K_M GGUF of llama-3.2-1B-Instruct, and ran it using Ollama.

thanks a lot, I'm unblocked now!