headacheboy opened this issue 4 years ago
Hello,
We did not perform any additional operations in pre-processing or post-processing.
Note that we run:
python2 postprocess_AMRs.py -f sent.amr -s sent
Good Luck! xdqkid
Hi,
When post-processing, I use the latest version of RikVN/AMR, and the API for fetching wiki labels has changed. I'm wondering whether this decreases the model's performance...
Hi, thank you for the reminder! When we do post-processing, we do some extra handling of the wiki links: we store a <name, wiki> dictionary built from the training set and prefer this dictionary over the default get_wiki_from_spotlight_by_name lookup. This may have some influence on the result.
I'm sorry, it has been a long time since I modified the wiki handling, and I have almost forgotten the details of my modification.
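For readers trying to reproduce this, a minimal sketch of that dictionary-first wikification follows. The helper names `build_wiki_dict`, `wikify`, and the `spotlight_lookup` callable are hypothetical, invented for illustration; only `get_wiki_from_spotlight_by_name` is named in the thread, and the actual extraction of <name, wiki> pairs from training AMRs depends on the AMR reader used.

```python
def build_wiki_dict(name_wiki_pairs):
    """Collect a <name, wiki> dictionary from pairs already extracted
    from the :name / :wiki roles of training-set AMR graphs."""
    wiki_dict = {}
    for name, wiki in name_wiki_pairs:
        # Keep the first wiki label seen for each name string.
        wiki_dict.setdefault(name, wiki)
    return wiki_dict

def wikify(name, wiki_dict, spotlight_lookup):
    """Dictionary-first wikification: only fall back to the Spotlight
    lookup (e.g. get_wiki_from_spotlight_by_name) on a dictionary miss."""
    if name in wiki_dict:
        return wiki_dict[name]
    return spotlight_lookup(name)

# Example usage with a stubbed-in Spotlight fallback:
d = build_wiki_dict([("Obama", "Barack_Obama"), ("Obama", "Obama_(surname)")])
print(wikify("Obama", d, lambda n: "-"))   # hit: training-set label wins
print(wikify("Merkel", d, lambda n: "-"))  # miss: falls back to Spotlight
```

The design choice here is simply that training-set wiki labels are trusted more than the live Spotlight API, which also makes results reproducible when the API changes.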
Hi,
I get 81.0 now, but I am unable to reproduce the best result of 81.4...
Could you share your post-processing modifications and the <name, wiki> dictionary for AMR 2.0?
Thank you!
Hi,
Thanks for your advice. I've been very busy recently (looking for a job, preparing my dissertation, and building websites for CCMT2020 & AACL2020). I will probably find time to push an update containing the full code later this year or next.
Cheers
Hi
When parsing AMR 2.0 with the model PTM-MT(WMT14B)-SemPar(WMT14M), we can only get 80.8 instead of the 81.4 reported in your paper.
We're wondering whether there are any important details in the pre-processing and post-processing for AMR 2.0. Could you provide more information about them?
Thank you!