TheophileBlard / french-sentiment-analysis-with-bert

How good is BERT? Comparing BERT to other state-of-the-art approaches on a French sentiment analysis dataset
MIT License

Runnable PyTorch Version of the Code #10

Closed cemrifki closed 2 years ago

cemrifki commented 3 years ago

Hi. Is there a PyTorch version of this code repo? If so, how can I use it with the PyTorch library instead of the TensorFlow framework? Thanks in advance.

TheophileBlard commented 3 years ago

Hi @cemrifki, and thank you for this nice issue. You're referring to the training code, right? Do you prefer "raw" PyTorch or code that uses high-level wrappers (such as the transformers Trainer)?

cemrifki commented 3 years ago

Hi, Théophile. I would like to use high-level wrappers. For example, I would like to create the following model, calling a PyTorch wrapper, method, or constructor:

model = TFAutoModelForSequenceClassification.from_pretrained("tblard/tf-allocine")

Thanks again.
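(Note for readers: one way to do this without waiting for a separate PyTorch checkpoint is the `from_tf=True` flag on `from_pretrained`, which converts a TensorFlow checkpoint to PyTorch on the fly. This is a sketch, not part of the original thread, and it requires both torch and tensorflow to be installed so the conversion can run.)

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the original TensorFlow checkpoint into a PyTorch model.
# from_tf=True tells transformers to convert the TF weights on the fly;
# this needs tensorflow installed alongside torch.
tokenizer = AutoTokenizer.from_pretrained("tblard/tf-allocine")
model = AutoModelForSequenceClassification.from_pretrained(
    "tblard/tf-allocine", from_tf=True
)
```

The resulting `model` is a regular PyTorch module, so it can be passed to the transformers Trainer or used in a plain training loop.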

TheophileBlard commented 2 years ago

For future readers, it seems that someone finally released a PyTorch version of the model! You can use it with the following code:

from transformers import pipeline

analyzer = pipeline(
    task='text-classification',
    model="philschmid/pt-tblard-tf-allocine",
    tokenizer="philschmid/pt-tblard-tf-allocine"
)

result = analyzer("Le munster est bien bien meilleur que le camembert !")
print(result) # [{'label': 'POSITIVE', 'score': 0.876563549041748}]

I tested a few prompts, and the results seem consistent, even though they are not perfectly identical to the TF version.