webis-de / small-text

Active Learning for Text Classification in Python
https://small-text.readthedocs.io/
MIT License

[Question] Are these results supposed to be different? #70

Open 1nuno opened 6 days ago

1nuno commented 6 days ago

Below I show the relevant part of a simple notebook I use to fine-tune a BERT model with the transformers library:

import pandas as pd
from datasets import Dataset
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from sklearn.model_selection import train_test_split
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

df = pd.read_csv("Puntuguese.csv")
train_df, test_df = train_test_split(df, test_size=0.3, random_state=42)
train_dataset = Dataset.from_pandas(train_df)
test_dataset = Dataset.from_pandas(test_df)

# Train
models = {
    "BERT-multilingual": "google-bert/bert-base-multilingual-uncased",
    # "BERTimbau": "neuralmind/bert-base-portuguese-cased"
}
for model_name, model_path in models.items():

    tokenizer = AutoTokenizer.from_pretrained(model_path)

    def tokenize_map(x):
        return tokenizer(
            x["text"], padding="max_length", truncation=True, max_length=231
        )

    train_dataset = test_dataset.map(tokenize_map, batched=True)
    test_dataset = test_dataset.map(tokenize_map, batched=True)

    model = AutoModelForSequenceClassification.from_pretrained(model_path, num_labels=2)

    train_dataset.set_format(
        type="torch", columns=["input_ids", "attention_mask", "label"]
    )
    test_dataset.set_format(
        type="torch", columns=["input_ids", "attention_mask", "label"]
    )

    def compute_metrics(p):
        preds = p.predictions.argmax(axis=-1)
        labels = p.label_ids
        precision, recall, f1, _ = precision_recall_fscore_support(
            labels, preds, average="binary"
        )
        accuracy = accuracy_score(labels, preds)
        return {
            "accuracy": accuracy,
            "precision": precision,
            "recall": recall,
            "f1": f1,
        }

    # Training arguments
    training_args = TrainingArguments(
        output_dir=f"./results-{model_name}",
        evaluation_strategy="epoch",
        learning_rate=2e-5,
        per_device_train_batch_size=16,
        per_device_eval_batch_size=16,
        num_train_epochs=3,
        logging_dir="./logs",
        logging_steps=10,
    )

    trainer = Trainer(
        model=model,
        args=training_args,
        train_dataset=train_dataset,
        eval_dataset=test_dataset,
        compute_metrics=compute_metrics,
    )

    trainer.train()

    # Save the model
    # model.save_pretrained(f"./{model_name}-finetuned")
    # tokenizer.save_pretrained(f"./{model_name}-finetuned")

    print(f"Training complete for model: {model_name}\n")

If I then run the following code:

print(trainer.evaluate())

I get:

{'eval_loss': 0.36417368054389954,
 'eval_accuracy': 0.8584795321637427,
 'eval_precision': 0.8909090909090909,
 'eval_recall': 0.8127962085308057,
 'eval_f1': 0.8500619578686494,
 'eval_runtime': 14.3473,
 'eval_samples_per_second': 119.186,
 'eval_steps_per_second': 7.458,
 'epoch': 3.0}

So far so good. My question arises when I try to do a similar training, but this time using the small-text library:

import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score
from transformers import AutoTokenizer
from small_text import (
    PoolBasedActiveLearner,
    PredictionEntropy,
    TransformerBasedClassificationFactory,
    TransformerModelArguments,
    TransformersDataset,
)

transformer_model_name = "google-bert/bert-base-multilingual-uncased"
num_classes = 2
target_labels = np.arange(num_classes)
tokenizer = AutoTokenizer.from_pretrained(transformer_model_name)
train = TransformersDataset.from_arrays(
    train_dataset["text"],
    train_dataset["label"],
    tokenizer,
    max_length=231,
    target_labels=target_labels,
)
test = TransformersDataset.from_arrays(
    test_dataset["text"],
    test_dataset["label"],
    tokenizer,
    max_length=231,
    target_labels=target_labels,
)
transformer_model = TransformerModelArguments(transformer_model_name)
clf_factory = TransformerBasedClassificationFactory(
    transformer_model,
    num_classes=2,
    kwargs={
        "lr": 2e-5,
        "mini_batch_size": 16,
        "num_epochs": 3,
        "device": "cuda",
    },
)
active_learner = PoolBasedActiveLearner(clf_factory, PredictionEntropy(), train)
active_learner.initialize_data(list(range(len(train.y))), train.y)

y_pred_train = active_learner.classifier.predict(train)
y_pred_test = active_learner.classifier.predict(test)

metrics = {
    "accuracy": accuracy_score(test.y, y_pred_test),
    "f1_score": f1_score(test.y, y_pred_test, average="weighted"),  # weighted for multi-class
    "precision": precision_score(test.y, y_pred_test, average="weighted"),
}

print(f"Train accuracy: {accuracy_score(train.y, y_pred_train):.2f}")
print(f"Test accuracy: {metrics['accuracy']:.2f}")
print(f"F1-score: {metrics['f1_score']:.2f}")
print(f"Precision: {metrics['precision']:.2f}")

The results I get in this case are:

Train accuracy: 0.75
Test accuracy: 0.66
F1-score: 0.64
Precision: 0.69

My question is: why are the results so different? The accuracy went from 0.85 to 0.66. What am I missing? I figured that since the small-text library uses transformers underneath, it would give a similar result. I did my best to keep the hyperparameters the same as well.

chschroeder commented 5 days ago

Hi @1nuno,

I'm not sure what is going wrong here, but if you want to use small-text only for text classification, you don't need a PoolBasedActiveLearner.
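
For reference, a minimal sketch of that, assuming the clf_factory, train, and test objects from your snippet and that the factory's new() method is used to build the classifier (which is what the active learner does internally):

# Sketch: plain classification without the active-learning loop
# (clf_factory, train, and test come from the snippet in the question).
clf = clf_factory.new()          # builds a TransformerBasedClassification instance
clf.fit(train)                   # fine-tune on the full labeled training set
y_pred_test = clf.predict(test)  # predictions for the held-out test set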

Hugging Face Trainer

I am not that familiar with the Hugging Face APIs, but it looks like you are "cheating" on the Hugging Face side by using the test dataset as a validation dataset:

trainer = Trainer(..., eval_dataset=test_dataset, ...)

It would be interesting to see the performance on the test set here at the same time.
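
A rough sketch of how that could look, reusing the names from the question and carving a separate validation split out of the training data (the 10% validation size is an arbitrary assumption):

# Sketch: keep the test set out of training and validation entirely.
train_part_df, val_df = train_test_split(train_df, test_size=0.1, random_state=42)
train_ds = Dataset.from_pandas(train_part_df).map(tokenize_map, batched=True)
val_ds = Dataset.from_pandas(val_df).map(tokenize_map, batched=True)
test_ds = Dataset.from_pandas(test_df).map(tokenize_map, batched=True)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_ds,
    eval_dataset=val_ds,          # validation split, not the test set
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate(eval_dataset=test_ds))  # metrics on the untouched test set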

small-text

If the error is on the small-text side, my guess would be that the model selection is what messes things up. Try working with the classifier directly and disable model selection: clf.fit(train_set, model_selection='none')
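
Concretely, building on the question's snippet, that could look roughly like this (again assuming clf_factory, train, and test from above, and that the factory's new() method returns the classifier):

# Sketch: train the classifier directly with model selection disabled.
clf = clf_factory.new()
clf.fit(train, model_selection='none')
print(f"Test accuracy: {accuracy_score(test.y, clf.predict(test)):.2f}")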

You are right that the resulting performances here should be similar. I have done this with BERT models and common benchmark datasets in the past, so it should generally be possible.