anthonywchen opened this issue 3 years ago
To answer your specific questions:
Do you or your co-authors think it makes sense to put this model into the `allennlp-models` repo? It would be a bit of work on your part to make a pull request into `-models`. It gives the code a more prominent place and makes it easier for others to build on it. But the requirements on code that lives in `-models` are higher than they are for your own repo (for example, we would need tests).
Short of that, does it make sense to implement an AllenNLP `Metric` class based on your work? It would be the first `Metric` we have that effectively runs a whole model, but from skimming the paper it sounds like this could be a valuable contribution to AllenNLP. I would be quite interested in getting this done.
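To make the idea concrete, a learned metric could be wrapped in the shape of AllenNLP's `Metric` interface (`__call__`, `get_metric`, `reset`) roughly as follows. This is a hypothetical, self-contained sketch: it does not import allennlp, and the `score_fn` callable (here a toy token-overlap function) merely stands in for the trained LERC model that a real implementation would run.

```python
from typing import Callable, List


class LearnedMetric:
    """Sketch of an AllenNLP-style Metric wrapping a learned scorer.

    A real version would subclass allennlp.training.metrics.Metric and
    run the trained LERC model inside __call__; here a plain callable
    stands in for that model.
    """

    def __init__(self, score_fn: Callable[[str, str], float]) -> None:
        self._score_fn = score_fn
        self._total = 0.0
        self._count = 0

    def __call__(self, predictions: List[str], references: List[str]) -> None:
        # Accumulate one learned score per (prediction, reference) pair.
        for pred, ref in zip(predictions, references):
            self._total += self._score_fn(pred, ref)
            self._count += 1

    def get_metric(self, reset: bool = False) -> float:
        # Average score over everything seen since the last reset.
        value = self._total / self._count if self._count else 0.0
        if reset:
            self.reset()
        return value

    def reset(self) -> None:
        self._total = 0.0
        self._count = 0


def toy_score(pred: str, ref: str) -> float:
    # Toy scorer (token Jaccard overlap), standing in for LERC.
    p, r = set(pred.split()), set(ref.split())
    return len(p & r) / len(p | r) if p | r else 1.0


metric = LearnedMetric(toy_score)
metric(["the cat sat"], ["the cat sat down"])
print(round(metric.get_metric(), 2))  # → 0.75
```

The accumulate-then-average pattern matches how existing AllenNLP metrics behave across batches, so swapping the toy scorer for a model forward pass would be the main remaining work.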
Please email me at jonathanb@allenai.org if you need further help.
Hello Dirk and Jonathan,
Thanks for your responses! I apologize for my delayed response (EMNLP has taken up a lot of my bandwidth this week). To address the responses:
[…] the `Output` section correct.

Thanks Jonathan! I really like the idea of making this part of the `Metric` class. One of the things I've been wrestling with for a learned metric is reproducibility/ease of use, and I feel like having an official model archive, as well as making it easy to load into a `Metric` class, would go a long way towards this! I think our first priority is getting a demo up for others to use, but I would love to follow up on getting a `Metric` class for this. Thanks for the suggestion!
Thanks again for all the help!
Best, Anthony
Hello, and thank you for hosting this awesome resource. My name is Anthony Chen and I have an EMNLP 2020 paper (co-authors are Matt Gardner & Gabriel Stanovsky) which presents a new trained reading comprehension evaluation metric, LERC, that I'd like to add to the AllenNLP demo site.
The code base for this paper resides here. The model and dataset reader are in the `lerc` folder, and I have a trained `model.tar.gz` file. I've begun the process of adding the UI and API components for LERC to the demo in my own fork here. Specifically, the files I have added/modified are:
- `ui/src/components/demos/EvaluateReadingComprehension.js`: This has the UI for LERC.
- `ui/src/models.js`: I've updated this to load the previous file.
- `api/allennlp_demo/lerc`: I've copied over @nitishgupta's NMN code, since that also uses an external Git repo, and modified it for LERC.

With that said, I have the following questions and steps that I believe need to be completed, and I would appreciate any help you could provide. I should also add that I haven't used Docker in a long time, so I may have messed up some things in the API files.
- I'm not sure how to write the `Output` section in `EvaluateReadingComprehension.js`. Right now, all I would like the output to do is print the inputs back out, along with the result of the predictor call, but I'm not sure how to do this.
- I'm not 100% sure the files in `api/allennlp_demo/lerc/` are written correctly. Any help you could provide in looking them over would be greatly appreciated!

Please let me know if there's anything else I missed, and thank you for hosting this great resource!
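For context, the echo-the-inputs-plus-score output I have in mind would look something like the sketch below. It is a toy stand-in, not the demo's actual plumbing: `run_lerc` and `build_output` are names I made up, the exact-match score is a placeholder for the trained model, and the JSON field names are only my assumption about the input schema.

```python
import json
from typing import Any, Dict


def run_lerc(inputs: Dict[str, Any]) -> Dict[str, Any]:
    """Toy stand-in for the LERC predictor call.

    A real endpoint would load the trained model.tar.gz and score the
    candidate answer against the reference; the exact-match check here
    is only a placeholder so the example runs on its own.
    """
    score = 1.0 if inputs["candidate"] == inputs["reference"] else 0.0
    return {"pred_score": score}


def build_output(inputs: Dict[str, Any]) -> Dict[str, Any]:
    # Echo the inputs back alongside the predictor's result so the
    # demo's Output section can render both.
    return {**inputs, **run_lerc(inputs)}


example = {
    "context": "The dog ran home.",
    "question": "What ran home?",
    "reference": "the dog",
    "candidate": "the dog",
}
print(json.dumps(build_output(example), indent=2))
```

Merging the request fields into the response dict keeps the UI side trivial: the `Output` section can simply iterate over the returned JSON.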
Best, Anthony