nlpaueb / greek-bert

A Greek edition of BERT pre-trained language model
MIT License

Greek-Bert for binary text classification #2

Closed · moikono closed this issue 2 years ago

moikono commented 4 years ago

Hello,

Is it possible to use the greek-bert model for binary classification? I cannot find any model parameter to specify the number of output labels.

Thank you

jkoutsikakis commented 4 years ago

Hello Mixali,

In order to use Greek-BERT for binary classification, you need to adjust the dimension of the output layer accordingly.

For example:

import pytorch_wrapper.functional as pwF

from torch import nn
from transformers import AutoModel

class BinaryGreekBERTModel(nn.Module):

    def __init__(self, bert_model, dp):
        super(BinaryGreekBERTModel, self).__init__()
        self._bert_model = bert_model
        self._dp = nn.Dropout(dp)
        # Single output logit on top of the 768-dim [CLS] representation.
        self._output_linear = nn.Linear(768, 1)

    def forward(self, text, text_len):
        # Build the attention mask from the true (unpadded) sequence lengths.
        attention_mask = pwF.create_mask_from_length(text_len, text.shape[1])
        # Take the [CLS] token representation, apply dropout, and project to a single logit.
        cls_representation = self._bert_model(text, attention_mask=attention_mask)[0][:, 0, :]
        return self._output_linear(self._dp(cls_representation))

gb_model = AutoModel.from_pretrained('nlpaueb/bert-base-greek-uncased-v1')
binary_gb_model = BinaryGreekBERTModel(gb_model, dp=0)
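A minimal usage sketch, assuming padded token ids from the Hugging Face tokenizer and a binary cross-entropy loss on the single output logit (the example sentences below are only illustrative):

import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('nlpaueb/bert-base-greek-uncased-v1')

# Tokenize a toy batch of two Greek texts and pad them to the same length.
batch = tokenizer(['καλημέρα κόσμε', 'ένα δεύτερο παράδειγμα'], padding=True, return_tensors='pt')
text = batch['input_ids']
text_len = batch['attention_mask'].sum(dim=-1)  # true (unpadded) lengths

logits = binary_gb_model(text, text_len)  # shape: [batch_size, 1]

# Train the single logit against 0/1 targets with binary cross-entropy.
targets = torch.tensor([[1.], [0.]])
loss = torch.nn.BCEWithLogitsLoss()(logits, targets)
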
iliaschalkidis commented 4 years ago

Hi @MixalisOikonomopoulos,

The best option is to follow the implementation proposed above by @jkoutsikakis, because it omits the pooler layer (an additional fully connected layer between the [CLS] token representation and the classification layer), which we have empirically found to decrease classification performance in many cases.

However, if you want a more "elegant" alternative, you may also use:

from transformers import AutoModelForSequenceClassification, AutoConfig

# Load the Greek-BERT configuration and set the number of output labels.
config = AutoConfig.from_pretrained('nlpaueb/bert-base-greek-uncased-v1')
config.num_labels = 2

# Use from_pretrained (rather than from_config) so the pre-trained encoder
# weights are kept; only the new classification head is randomly initialized.
gb_model = AutoModelForSequenceClassification.from_pretrained('nlpaueb/bert-base-greek-uncased-v1', config=config)
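
For inference, a short sketch along the same lines (the example sentence is only illustrative; recent transformers versions expose the logits on the model output object):

import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('nlpaueb/bert-base-greek-uncased-v1')
inputs = tokenizer('αυτό είναι ένα παράδειγμα', return_tensors='pt')

with torch.no_grad():
    outputs = gb_model(**inputs)

predicted_label = outputs.logits.argmax(dim=-1).item()  # 0 or 1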