Digital-Defiance / nlp-metaformer

An ablation study on the transformer network for Natural Language Processing

experiment: pre-trained embeddings (bard uncased) #63

Closed RuiFilipeCampos closed 5 months ago

RuiFilipeCampos commented 5 months ago

SOTA performance on this dataset is around 60-65% accuracy. Having reached 56% accuracy with small models, I consider this a success; it is time to move on.
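For context, a minimal sketch of how pre-trained uncased BERT embeddings could be loaded into a model's token embedding layer via Hugging Face `transformers`. This is an illustration only, not the repo's actual setup; the model name `bert-base-uncased` and the use of `nn.Embedding.from_pretrained` are assumptions.

```python
# Sketch (assumed setup, not the repo's code): initialise a token embedding
# layer from pre-trained uncased BERT weights.
import torch
import torch.nn as nn
from transformers import AutoModel

bert = AutoModel.from_pretrained("bert-base-uncased")
pretrained = bert.get_input_embeddings().weight.detach()  # (vocab_size, 768)

# freeze=False keeps the embeddings trainable during the ablation run.
embedding = nn.Embedding.from_pretrained(pretrained, freeze=False)

token_ids = torch.tensor([[101, 7592, 2088, 102]])  # [CLS] hello world [SEP]
vectors = embedding(token_ids)                       # shape: (1, 4, 768)
```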