This repository reproduces the experimental results of CLRCMD (pronounced "clear command") reported in our paper, to appear in the ACL 2022 main track.
We wanted to upload our checkpoints to a model registry such as the Hugging Face Hub to make them easily accessible, but due to the complicated process we uploaded them manually to our Google Drive instead.
Please visit this link to download the checkpoints used in our experiments.
We assume that `pytorch_model.bin` and `model_args.json` are located in `/home/username/checkpoints/bert-rcmd/`.
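A quick way to confirm the files are in place (the path is just the example used throughout this README; substitute your own):

```bash
# List the checkpoint directory; both files should be present
ls /home/username/checkpoints/bert-rcmd/
# model_args.json  pytorch_model.bin
```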
We assume that the user works inside an Anaconda environment.

```bash
conda create -n clrcmd python=3.8
conda activate clrcmd
pip install -r requirements.txt
python setup.py develop
```
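As a quick sanity check that the editable install succeeded (the package name `clrcmd` is an assumption based on the repository name):

```bash
# Should print "ok" without raising ImportError; package name assumed
python -c "import clrcmd; print('ok')"
```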
We download the STS benchmark datasets using the script provided by the SimCSE repository.

```bash
bash examples/download_sts.sh
```

- `tokenizer.sed`: tokenizer script used in `download_sts.sh`
We provide a script for downloading the iSTS benchmarks.

```bash
bash examples/download_ists.sh
```
We also fix the following errors in the downloaded benchmark files (a scripted way to apply one of these patches is sketched after the list):

- `STSint.testinput.answers-students.sent1.chunk.txt`
  - `a closed path` → `a closed path.`
  - `has no gaps` → `[ has no gaps ]`
  - `is in a closed path,` → `[ is in a closed path, ]`
  - `is in a closed path.` → `[ is in a closed path. ]`
- `STSint.testinput.answers-students.sent1.txt`
  - `battery terminal` → `battery terminal`
  - `switch z, that` → `switch z, that`
- `STSint.testinput.answers-students.sent2.chunk.txt`
  - `are not separated by the gap` → `[ are not separated by the gap ]`
  - `are` → `[ are ]`
  - `in closed paths` → `[ in closed path ]`
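If these fixes are not already applied by the download script, each one can be scripted with `sed`; a minimal sketch for the last fix above (the file path is an assumption based on the evaluation commands later in this README):

```bash
# Patch one benchmark file in place: wrap the chunk in brackets and
# singularize "paths" (path assumed from the iSTS layout used below)
sed -i 's/in closed paths/[ in closed path ]/' \
  data/ISTS/test_goldStandard/STSint.testinput.answers-students.sent2.chunk.txt
```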
We download the NLI training dataset using the script provided by the SimCSE repository.

```bash
bash examples/download_nli.bash
```
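As a rough sanity check on the download (the file name `nli_for_simcse.csv` is an assumption carried over from the SimCSE repository's script):

```bash
# Count the training pairs; a truncated or failed download shows up here
wc -l data/nli_for_simcse.csv
```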
```bash
# Help message
python -m examples.run_evaluate_sts -h

# One example: evaluate the bert-rcmd model on the STS benchmarks
python -m examples.run_evaluate_sts --data-dir data --model bert-rcmd

# Train a model
python -m examples.run_train --data-dir data --model bert-rcmd

# Evaluate a trained checkpoint
python -m examples.run_evaluate_sts --data-dir data --model bert-rcmd --checkpoint /home/username/checkpoints/bert-rcmd
```
```bash
# Filter out alignments that have low scores
python -m examples.run_preprocess_ists --alignment-path data/ISTS/test_goldStandard/STSint.testinput.images.wa
```
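The gold standard in the SemEval-2016 iSTS benchmark covers three sources (`images`, `headlines`, `answers-students`), so the same step would be repeated per alignment file; a sketch:

```bash
# Preprocess every source's alignment file (source names follow the
# SemEval-2016 iSTS benchmark; repeats the command above per file)
for src in images headlines answers-students; do
  python -m examples.run_preprocess_ists \
    --alignment-path "data/ISTS/test_goldStandard/STSint.testinput.${src}.wa"
done
```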
```bash
# BERT-avg
python -m examples.run_evaluate_ists --data-dir data/ISTS/test_goldStandard/ --source images --checkpoint-dir checkpoints/bert-avg/
./data/ISTS/test_goldStandard/evalF1.pl ./data/ISTS/test_goldStandard/STSint.testinput.images.wa.equi ./checkpoints/bert-avg/images.wa

# BERT-CLRCMD
python -m examples.run_evaluate_ists --data-dir data/ISTS/test_goldStandard/ --source images --checkpoint-dir checkpoints/bert-rcmd/
./data/ISTS/test_goldStandard/evalF1.pl ./data/ISTS/test_goldStandard/STSint.testinput.images.wa.equi ./checkpoints/bert-rcmd/images.wa
```
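To run both checkpoints over all three sources without repeating yourself, a loop like this works (same commands as above, just parameterized):

```bash
# Evaluate each checkpoint on each iSTS source, then score it with evalF1.pl
for ckpt in bert-avg bert-rcmd; do
  for src in images headlines answers-students; do
    python -m examples.run_evaluate_ists --data-dir data/ISTS/test_goldStandard/ \
      --source "${src}" --checkpoint-dir "checkpoints/${ckpt}/"
    ./data/ISTS/test_goldStandard/evalF1.pl \
      "./data/ISTS/test_goldStandard/STSint.testinput.${src}.wa.equi" \
      "./checkpoints/${ckpt}/${src}.wa"
  done
done
```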
| checkpoint | sts12 | sts13 | sts14 | sts15 | sts16 | stsb | sickr | avg |
|---|---|---|---|---|---|---|---|---|
| bert-rcmd | 0.7523 | 0.8506 | 0.8099 | 0.8626 | 0.8150 | 0.8521 | 0.8049 | 0.8211 |
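The `avg` column is the unweighted mean of the seven task scores, which is easy to verify:

```bash
# Recompute the average from the per-task scores in the table
python -c "print(round(sum([0.7523, 0.8506, 0.8099, 0.8626, 0.8150, 0.8521, 0.8049]) / 7, 4))"
# 0.8211
```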