keeeeenw / MicroLlama

MicroLlama is a small Llama-based model with 300M parameters, trained from scratch on a $500 budget.
Apache License 2.0

How was BERT evaluated? (expt setup) #2

Closed · Axe-- closed this 7 months ago

keeeeenw commented 7 months ago

I used exactly this command for evaluation: https://github.com/keeeeenw/lm-evaluation-harness/blob/34a5c9efebdb771d70659c75c96e1c083e634f29/eval.sh#L25
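(The linked `eval.sh` is not reproduced here; for readers who don't want to follow the link, a typical lm-evaluation-harness invocation looks roughly like the command below. The flags follow the harness's README; the model id and task list are placeholders, not necessarily the exact settings in `eval.sh`.)

```shell
# Illustrative sketch only -- model id and task list are assumptions,
# not the exact contents of the linked eval.sh.
python main.py \
    --model hf-causal \
    --model_args pretrained=keeeeenw/MicroLlama \
    --tasks hellaswag,winogrande \
    --num_fewshot 0 \
    --device cuda:0
```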

Axe-- commented 7 months ago

Gotcha, what about finetuning BERT for this expt? (assuming no few-shot eval)

keeeeenw commented 7 months ago

I have not tried finetuning BERT. For an apples-to-apples comparison, I used the same settings to evaluate both my model and the BERT base model, without any finetuning.

Axe-- commented 7 months ago

Ah! Got it. Had never explored zero-shot with BERT. Thanks!