Closed avacaondata closed 4 years ago
Hello @alexvaca0,
Good question! I have two things to say about it:
We are evaluating how to address that problem. For now I recommend using only BETO-cased, which doesn't have this problem, or trying BETO-uncased (tokenizing with accents via the PR) only when your fine-tuning task has a large training dataset.
First of all, thank you very much for your quick response, I really appreciate it. So, if we use the code from that pull request, the tokenizer won't remove accents when tokenizing new texts? We are currently using BETO-cased, which, as I understood from your answer, was trained on texts in which accents were kept. Am I right?
If you are working with BETO-cased you don't need the code in the PR. The following code should work:
```python
tokenizer = BertTokenizer.from_pretrained("dccuchile/cased", do_lower_case=False)
tokenizer.tokenize('Hola, Qué pasó aquí compañero?')
# ['Hola', ',', 'Qué', 'pasó', 'aquí', 'compañero', '?']
```
We are accessing your model through AutoTokenizer, AutoModel, etc.; is there any difference compared to calling it through BertTokenizer, BertModel, etc.?
Maybe it has to do with the transformers version; which one are you using?
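If it helps, here is a minimal way to check which transformers version is installed, using only the standard library (this is just a generic sketch, not something specific to BETO):

```python
from importlib.metadata import PackageNotFoundError, version

def installed_transformers_version():
    # Returns the installed transformers version string, or None if absent
    try:
        return version("transformers")
    except PackageNotFoundError:
        return None

print(installed_transformers_version())
```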
Hi again,
I tried the following code with version 2.4.1:

```python
tokenizer = AutoTokenizer.from_pretrained("dccuchile/bert-base-spanish-wwm-cased")
tokenizer.tokenize('Hola, Qué pasó aquí compañero?')
```

and it worked as expected.
PS: the following won't work, because AutoTokenizer and AutoModel pick the model class based on the model name. The name should contain 'bert', and that's not the case with 'dccuchile/cased':

```python
tokenizer = AutoTokenizer.from_pretrained("dccuchile/cased")
tokenizer.tokenize('Hola, Qué pasó aquí compañero?')
```
Hey, how can I use this model? Do I have to download the cased files and then use them exactly the same way BERT works, by loading the files I already downloaded?
Hi again, and thanks in advance for your response. We are wondering how you dealt with accents, which are very important in Spanish (e.g. hacia is not the same as hacía). When using your tokenizer with the transformers library, it seems that both words (and other pairs of this type) get the same id in the embedding and therefore the same vector representation.
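If the two words collapse to the same id, accent stripping during tokenizer normalization is the likely culprit. A self-contained sketch of the normalization step that BERT-style basic tokenizers apply when accent stripping is enabled (this mirrors the behavior for illustration; it is not the library's own code):

```python
import unicodedata

def strip_accents(text: str) -> str:
    # Decompose characters (NFD) and drop combining marks (category "Mn"),
    # which is how accent stripping turns 'hacía' into 'hacia'
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if unicodedata.category(ch) != "Mn")

print(strip_accents("hacía"))                           # → hacia
print(strip_accents("hacía") == strip_accents("hacia"))  # → True
```

With the cased tokenizer loaded with do_lower_case=False, this stripping does not happen and the two words remain distinct.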