nlp-with-transformers / notebooks

Jupyter notebooks for the Natural Language Processing with Transformers book
https://transformersbook.com/
Apache License 2.0

Chapter 4 - Inference Error. Pinpointed the error, but don't know how to solve it #141

Open Ice-Citron opened 1 week ago

Ice-Citron commented 1 week ago

Information

The problem arises in chapter: 4 (Multilingual Named Entity Recognition)

Describe the bug

The error arises if you run the exemplar Colab code, push the resulting model to your Hugging Face Hub account, and then try to run it. When trying to run inference, it outputs the error: "Can't load tokenizer using from_pretrained, please update its configuration: tokenizers.AddedToken() got multiple values for keyword argument 'special'".

To Reproduce

Steps to reproduce the behavior:

  1. Run the exemplar Colab code (https://colab.research.google.com/github/nlp-with-transformers/notebooks/blob/main/04_multilingual-ner.ipynb) up to the point shown in the attached screenshot.

  2. Once the model is trained and pushed to the Hub, visit the Hugging Face Hub and use your personal Inference API; the tokenizer error appears (see attached screenshot).

  3. The same applies if you try to load the model from the Hub: the error occurs at the tokenizer stage, and more precisely, I believe, in "special_tokens_map.json" (see attached screenshot).

  4. However, this seems to be avoidable if I instead pass the "mask_token" special token as an extra kwarg, as recommended by GPT-4 (see attached screenshot):

from transformers import AutoTokenizer, AutoModelForTokenClassification

# Manually specify the mask token if the default configuration is problematic;
# passing it as a kwarg overrides the broken entry in special_tokens_map.json
special_tokens_dict = {
    "mask_token": {
        "content": "<mask>",
        "single_word": False,
        "lstrip": True,
        "rstrip": False,
        "normalized": True,
        "special": True,
        "__type": "AddedToken",
    }
}
tokenizer = AutoTokenizer.from_pretrained(
    "shng2025/xlm-roberta-base-finetuned-panx-de",
    use_fast=True,
    **special_tokens_dict,
)
model = AutoModelForTokenClassification.from_pretrained(
    "shng2025/xlm-roberta-base-finetuned-panx-de"
)
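
If the tokenizer loads this way, a quick local sanity check (a sketch of mine, not from the book, reusing the model and tokenizer objects from the block above) is to run the NER pipeline on the book's German example sentence:

from transformers import pipeline

# Sanity check: run the fine-tuned model locally on the book's example sentence
ner = pipeline(
    "token-classification",
    model=model,
    tokenizer=tokenizer,
    aggregation_strategy="simple",  # merge subword pieces into whole entities
)
print(ner("Jeff Dean ist ein Informatiker bei Google in Kalifornien"))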

Expected behavior

I expected that once I fine-tuned the model by running the exemplar code and pushed it to the Hub, the model could easily be run from the Inference API. You can also check my code in my personal notebook: https://colab.research.google.com/drive/1F5L_vL1o6WC3DxGWDF_g6ZPKTJ7dcmxR#scrollTo=orgQubxKVrNX

However, the same error occurred when I ran the exemplar code directly, so I think it's likely caused by some change made to the library after the book was published. As mentioned, it's still runnable if I pass "mask_token" as a **kwarg. But this is very strange, and I would love to know what's causing this error, as I am still learning.

Ice-Citron commented 1 week ago

Very funny. I actually managed to get it working now: I simply deleted "mask_token" from special_tokens_map.json and it worked. I'm still not sure why it worked before and doesn't now. If possible, could someone point me to the relevant config changes? It would also be best for this code to be updated.
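
For anyone who wants to apply the same fix without the Hub's web editor, here is a minimal sketch (my own, not an official recipe) that downloads special_tokens_map.json, drops the "mask_token" entry, and uploads the edited file back to the model repo. It assumes you are logged in (e.g. via huggingface-cli login) and uses my repo id; replace it with your own.

import json
from huggingface_hub import hf_hub_download, upload_file

repo_id = "shng2025/xlm-roberta-base-finetuned-panx-de"  # replace with yours

# Download the current special_tokens_map.json from the Hub
path = hf_hub_download(repo_id=repo_id, filename="special_tokens_map.json")
with open(path) as f:
    special_tokens = json.load(f)

# Remove the entry that breaks AutoTokenizer.from_pretrained
special_tokens.pop("mask_token", None)

with open("special_tokens_map.json", "w") as f:
    json.dump(special_tokens, f, indent=2)

# Push the edited file back to the model repo
upload_file(
    path_or_fileobj="special_tokens_map.json",
    path_in_repo="special_tokens_map.json",
    repo_id=repo_id,
)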

(screenshots attached)

Ice-Citron commented 1 week ago

Did some further digging. I checked others who trained this model on Hugging Face, especially repos created within the last week. Those trained with TensorFlow still seem able to run inference automatically, but people who used PyTorch, like me, faced a similar issue. Hence I believe this error is PyTorch-only for now; I will try to resolve it and make a pull request once it's debugged.

(screenshot attached)

Ice-Citron commented 1 week ago

Sorry, I think I found the error: it comes from not updating my libraries. I will update and retrain the models (spending $0.60 of compute credits) and test this hypothesis again tomorrow.
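
(For anyone comparing environments, a quick snippet to record the library versions in the current runtime, which is what I'll diff against after updating:)

import datasets
import tokenizers
import transformers

# Record the versions in this runtime to compare old vs. updated environments
print("transformers:", transformers.__version__)
print("tokenizers:", tokenizers.__version__)
print("datasets:", datasets.__version__)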

Ice-Citron commented 1 week ago

I managed to get it to work. Basically, the install script in the Colab notebook (I don't know if you're using Colab) is faulty, if you can call it that: its installation requirements pin older versions of the libraries even though newer ones exist, and it's the newer libraries that work. You can try running this after the default installation cell:

# %%capture  # commented out so the install output stays visible
!pip install transformers==4.41.2
!pip install datasets==2.20.0

!pip install pyarrow==16.0
!pip install requests==2.32.3

!pip install torch==2.3.0 torchvision==0.18.0 torchaudio==2.3.0

!pip install importlib-metadata

!pip install accelerate -U

You can also refer to my notebook: https://colab.research.google.com/drive/1F5L_vL1o6WC3DxGWDF_g6ZPKTJ7dcmxR#scrollTo=r1SReYWcdRjZ
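
If your model is already on the Hub with a broken special_tokens_map.json, one possible repair after upgrading (a sketch, only tested on my own repo) is to reload the tokenizer under the new transformers version and push it again, which rewrites the tokenizer files in the format the current libraries expect:

from transformers import AutoTokenizer

# With the upgraded libraries, re-serialize the base tokenizer and push it to
# the fine-tuned repo; this rewrites special_tokens_map.json in the new format
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
tokenizer.push_to_hub("shng2025/xlm-roberta-base-finetuned-panx-de")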

Ice-Citron commented 1 week ago

I will leave this here in case anyone encounters the same bug. The bug is caused by this code block:

# The book's setup cell: clones the repo and installs pinned requirements
!git clone https://github.com/nlp-with-transformers/notebooks.git
%cd notebooks
from install import *
install_requirements()

It installs older, deprecated versions of the libraries, which causes these bugs. That's my hypothesis, at least.

Ice-Citron commented 1 week ago

To the authors: you should fix requirements.txt and install.py to bring them up to date. That's all for now.