Closed by ArneBinder 3 years ago
Fixed in PR #3.

For the `random` encoder, a non-learnable (weight-fixed) linear embedding layer is used, which only requires a tokenizer from a transformer to provide tokenization as its input. However, `dataset_processor: transformer` does not provide a valid tokenizer, unlike all the other choices.

Btw, to keep the results from `random` consistent, always set `bert` (`BertTokenizer`) as the tokenizer throughout our experiments.
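The idea of a weight-fixed random embedding that depends only on the tokenizer's vocabulary can be sketched as follows. This is a minimal illustration, not the repository's actual implementation; the vocabulary size (30522, as for `bert-base-uncased`'s `BertTokenizer`), the hidden dimension, and the example token ids are assumptions:

```python
import numpy as np

# Assumed values: vocab size of a BertTokenizer and a typical hidden size.
vocab_size, hidden_dim = 30522, 768

# A fixed random embedding table: initialized once with a seed and never
# updated during training. The only tokenizer-dependent quantity is
# vocab_size, which is why a valid tokenizer must be available.
rng = np.random.default_rng(0)
embedding = rng.standard_normal((vocab_size, hidden_dim))

# Hypothetical token ids as a tokenizer would produce them.
token_ids = [101, 7592, 2088, 102]

# Lookup: each token id selects its fixed random vector.
vectors = embedding[token_ids]
print(vectors.shape)  # (4, 768)
```

Because the table is never trained, the same seed and the same tokenizer yield identical embeddings across runs, which is why fixing the tokenizer matters for comparability.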
When the project is initially set up (git clone, create environment, pip install), the command

```
python evaluate.py --help
```

fails with:

Interestingly, the command works if another one like

```
python evaluate.py dataset=conll2003 dataset_processor=bert encoder=bert evaluation/dataset=nway_kshot_5_1
```

was successfully called at least once.