Closed: jinyiyang-jhu closed this issue 3 years ago
Hi, the references are one-indexed. You can try the command:
aer.py ${reference_path} ${file_path} --oneRef --fAlpha 0.5
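To see why the --oneRef flag matters, it helps to look at the standard AER and alpha-weighted F-measure definitions (Och & Ney style). The sketch below is the textbook math, not the repository's aer.py; the function names and the toy alignment sets are illustrative only. An off-by-one between a one-indexed gold file and zero-indexed hypotheses silently deflates the overlap counts and inflates AER:

```python
# Textbook AER and alpha-weighted F-measure over sets of (src, tgt)
# index pairs. Illustrative sketch only -- not the repo's aer.py.

def aer(hyp, sure, possible):
    """Alignment Error Rate: 1 - (|A&S| + |A&P|) / (|A| + |S|)."""
    a, s = set(hyp), set(sure)
    p = set(possible) | s  # sure links are also possible
    return 1.0 - (len(a & s) + len(a & p)) / (len(a) + len(s))

def f_alpha(hyp, gold, alpha=0.5):
    """Alpha-weighted F-measure; alpha=0.5 is the harmonic mean."""
    a, g = set(hyp), set(gold)
    prec = len(a & g) / len(a)
    rec = len(a & g) / len(g)
    return 1.0 / (alpha / prec + (1.0 - alpha) / rec)

gold = {(1, 1), (2, 2)}          # one-indexed gold links
good = {(1, 1), (2, 2)}          # hypothesis in the same indexing
shifted = {(0, 0), (1, 1)}       # same links, zero-indexed (off by one)

print(aer(good, gold, gold))     # 0.0
print(aer(shifted, gold, gold))  # 0.5
```

With alpha = 0.5 the F-measure reduces to the usual harmonic mean of precision and recall, which matches the --fAlpha 0.5 setting above.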
Thanks!
Hi, I'm new to this toolkit and am trying to run a simple test with your pretrained model from the table (the last row: "Ours (multilingually fine-tuned w/ --train_co, softmax)"). I evaluate with this model as follows:
python tools/aer.py examples/roen.gold examples/roen.awesome-align.out
The result I got is: examples/roen.awesome-align.out: 59.5% (45.3%/36.6%/5014), F-Measure: 0.405. For the other language pairs I got "En-Fr" -> 42.9%, "Ja-En" -> 81.3%, "Zh-En" -> 69.0%, all of which are much worse than the numbers reported in your table. I also tried running the awesome-align command
with the *.src-tgt data under "examples/" and the same model, as described in the README, and the results are similar to (or slightly worse than) the ones above. Could you let me know what the issue might be?
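For anyone debugging a similar mismatch, the sketch below shows how a one-indexed Pharaoh-style gold line might be shifted into zero-indexed hypothesis space. The separator convention assumed here ("-" for sure links, "p" for possible links) and the parse_gold helper are illustrative assumptions, not a description of aer.py's internals:

```python
import re

def parse_gold(line, one_indexed=True):
    """Parse a Pharaoh-style gold line like '1-2 3p4' into
    (sure, possible) sets of zero-indexed (src, tgt) pairs.
    The '-'/'p' separator convention is an assumption here."""
    shift = 1 if one_indexed else 0
    sure, possible = set(), set()
    for token in line.split():
        # re.split with a capturing group keeps the separator.
        src, sep, tgt = re.split(r"(-|p)", token)
        pair = (int(src) - shift, int(tgt) - shift)
        possible.add(pair)   # every gold link counts as possible
        if sep == "-":
            sure.add(pair)   # only '-' links count as sure
    return sure, possible

sure, possible = parse_gold("1-2 3p4")
print(sorted(sure), sorted(possible))  # [(0, 1)] [(0, 1), (2, 3)]
```

If the gold file is one-indexed but the hypothesis file is zero-indexed (or vice versa), every pair misses by one, which would produce exactly the kind of inflated AER numbers described above.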