dccuchile / beto

BETO - Spanish version of the BERT model

What did you do with accents? #7

Closed: avacaondata closed this issue 4 years ago

avacaondata commented 4 years ago

Hi again, and thanks in advance for your response. We are wondering how you dealt with accents, which are very important in Spanish (e.g. hacia is not the same as hacía). When using your tokenizer with the transformers library, it seems that both words (and other pairs of this kind) get the same id in the embedding and therefore have the same vector representation.
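
For reference, the behavior described above can be reproduced roughly like this (a sketch; the exact model name and calls are assumptions, since the comment doesn't show the code used):

from transformers import BertTokenizer

# Assumed: the uncased BETO checkpoint with default settings.
tokenizer = BertTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-uncased")

# Lowercasing in the default uncased pipeline also strips accents,
# so both words collapse to the same token, and hence the same id:
print(tokenizer.tokenize("hacia"))  # ['hacia']
print(tokenizer.tokenize("hacía"))  # ['hacia'] as well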

josecannete commented 4 years ago

Hello @alexvaca0,

Good question! I have two things to say about it:

  1. You're right, the transformers library doesn't handle accents the way we expect. I opened a PR to address that problem (https://github.com/huggingface/transformers/pull/2333) that works, but it probably needs a major refactor.
  2. Also, even though we included accents when constructing the vocabulary, we did not include them when creating the pretraining data. That is a serious problem, because BETO (uncased) never saw accents during training.

We are evaluating how to address that problem. For now I recommend that you either use only BETO-cased, which doesn't have this problem, or try BETO-uncased (tokenizing with accents using the PR) only when your fine-tuning task has a large training dataset.
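
For later readers: more recent transformers releases expose this behavior as an explicit strip_accents argument on BertTokenizer (whether your installed version supports it is an assumption to check). A minimal sketch:

from transformers import BertTokenizer

# Assumes a transformers release where BertTokenizer accepts strip_accents.
# strip_accents=False keeps accents even while lowercasing:
tokenizer = BertTokenizer.from_pretrained(
    "dccuchile/bert-base-spanish-wwm-uncased",
    do_lower_case=True,
    strip_accents=False,
)
print(tokenizer.tokenize("hacía"))  # accented form is preserved (possibly as subwords)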

avacaondata commented 4 years ago

First of all, thank you very much for your quick response, I really appreciate it. So, if we use the code from that pull request, the tokenizer won't remove accents when tokenizing new texts? We are currently using BETO-cased, which, as I understood from your answer, was trained on texts in which accents were preserved, am I right?

josecannete commented 4 years ago

If you are working with BETO-cased, you don't need the code in the PR. The following code should work:

tokenizer = BertTokenizer.from_pretrained("dccuchile/cased", do_lower_case=False)
tokenizer.tokenize('Hola, Qué pasó aquí compañero?')

['Hola', ',', 'Qué', 'pasó', 'aquí', 'compañero', '?']
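
To double-check that accented pairs stay distinct in the cased model, you can compare the ids directly (a quick sketch; the printed values depend on the vocabulary):

print(tokenizer.convert_tokens_to_ids(tokenizer.tokenize('hacia')))
print(tokenizer.convert_tokens_to_ids(tokenizer.tokenize('hacía')))
# The two id sequences differ, so each word keeps its own representation.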

avacaondata commented 4 years ago

We are accessing your model through AutoTokenizer, AutoModel, etc.; is there any difference compared to calling it through BertTokenizer, BertModel, etc.?

avacaondata commented 4 years ago

Maybe it has to do with the version of transformers; which one are you using?

josecannete commented 4 years ago

Hi again,

I tried the following code with version 2.4.1:

tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-cased")
tokenizer.tokenize('Hola, Qué pasó aquí compañero?')

And it worked as expected.

PS: the following won't work, because AutoTokenizer and the AutoModel classes infer the model class from the model's name. In this case the name should contain 'bert', which isn't the case for 'dccuchile/cased'.

tokenizer = AutoTokenizer.from_pretrained("dccuchile/cased")
tokenizer.tokenize('Hola, Qué pasó aquí compañero?')

ahtesham33 commented 3 years ago

Hey, how can I use this model? Do I have to download the cased files, and does it then work exactly the same as BERT, by loading the files I already downloaded?
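
For reference, the standard loading pattern with the full Hub name from earlier in this thread (a minimal sketch assuming a recent transformers version; the feature-extraction step is just an illustration):

from transformers import BertTokenizer, BertModel

# from_pretrained downloads and caches the files automatically,
# so there is no need to download or load them manually:
tokenizer = BertTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-cased")
model = BertModel.from_pretrained("dccuchile/bert-base-spanish-wwm-cased")

# Encode a sentence and run it through the model:
inputs = tokenizer("Hola, ¿qué pasó aquí compañero?", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)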