Closed flipz357 closed 9 months ago
Hello Juri
Those are sensible questions!
a) The broader research this was conducted in was about multilingual models and their comparison with monolingual models in data-scarce scenarios. For fair comparisons we used the same base model in all cases. We are well aware that for English this does not lead to a SOTA model, but that was not the intent of our research. I think the AMRBART people also have an English model on Hugging Face, which probably performs better, but I'm not sure.
b) a paper has indeed been submitted but not published yet!
Bram
Thanks Bram!
a) Thanks for the explanation! In the future I'll be more interested in the multilingual setting, so your mBART parsers may be very valuable anyway. Still, just concerning English: if I had the choice I'd use the English BART for English parsing, because I'd guess it may be a tiny bit better.
The parsers from the AMRBART people that I found unfortunately do not work in the API; they throw errors about certain tokenizers not being found.
b) Nice, best of luck with the submission.
I clarified with the AMRBART folks that API inference doesn't work.
So all the props to your project 🥳! It's the first time I can access a parser this simply. amrlib and AMRBART are simple too, but still not as widely usable as your project, since they cannot be queried simply from an API.
@flipz357 Hm. You'll indeed get results from the API, but our custom tokenizer also provides some special functionality to postprocess the generated data. I do not know how good the quality is of what you'll get without that postprocessing.
@BramVanroy Yes, I know, but that wasn't a problem. I just copied/adapted the code for this from your repo...
(just to add: the post-processing script is also very light and can be run on any machine -- which isn't true for the parsing inference; that's the costly part of the pipeline, and hence it's great to have it accessible via the API)
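To illustrate the workflow being discussed, here is a minimal sketch of querying a model through the hosted Hugging Face Inference API and then applying a light postprocessing step locally. The model id is left as a placeholder, and `postprocess` is only a stand-in: the repo's real postprocessing script does more than whitespace normalisation.

```python
# Sketch: query the Hugging Face Inference API for a seq2seq parser and
# postprocess the generation locally. Model id and postprocessing logic
# are placeholders, not the repo's actual values.
import json
import urllib.request

API_URL = "https://api-inference.huggingface.co/models/<model-id>"  # fill in the model id


def query(text: str, token: str) -> str:
    """Send a sentence to the hosted Inference API and return the raw generation."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps({"inputs": text}).encode("utf-8"),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)[0]["generated_text"]


def postprocess(raw: str) -> str:
    """Stand-in for the repo's postprocessing: only collapses whitespace.
    Swap in the actual postprocessing routine from the repo here."""
    return " ".join(raw.split())
```

Calling `query(...)` needs a Hugging Face API token; the postprocessing runs fine on any machine, which is exactly the point made above.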
...but (sorry for spamming this thread) for beginners and less experienced programmers, as well as for teaching purposes, it could actually make sense to have an API that also includes post-processing. @BramVanroy Do you know if this could be accomplished within Hugging Face, or is there no way around setting up one's own server?
Hm, I have not used the HF inference endpoints, but it would seem that you can add a custom container, so I suppose that theoretically it is possible.
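For what it's worth, Hugging Face Inference Endpoints also support a custom `handler.py` with an `EndpointHandler` class, which could bundle parsing and postprocessing server-side. Below is a hedged sketch of that shape; the model loading is stubbed out and `postprocess` is a placeholder, not the repo's actual script.

```python
# Sketch of a custom handler.py for a Hugging Face Inference Endpoint.
# The EndpointHandler class name and __call__ signature follow the
# documented convention; model loading and postprocessing are stubs.


def postprocess(raw: str) -> str:
    """Placeholder postprocessing: collapse whitespace. Replace with the
    repo's real postprocessing routine."""
    return " ".join(raw.split())


class EndpointHandler:
    def __init__(self, path: str = ""):
        # In a real endpoint this would load the model/tokenizer from `path`,
        # e.g. transformers.pipeline("text2text-generation", model=path).
        # Kept as a stub so the sketch stays self-contained.
        self.pipeline = None

    def __call__(self, data: dict) -> list:
        text = data.get("inputs", "")
        # raw = self.pipeline(text)[0]["generated_text"]  # real inference
        raw = text  # stub: echo the input in place of model output
        return [{"generated_text": postprocess(raw)}]
```

With a handler like this deployed, the API response would already be postprocessed, which would address the "easy parsing for everyone" point above.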
I see, thank you. If I find some time I'll check this out. So for now the question of the easiest parsing for everyone is still open, although your project really makes a good step towards it.
I guess you can close this issue, since both questions are kind of resolved.
Hi Bram,
I'm already using your nice work for some experiments. I especially appreciate the access via the Hugging Face API, which lets me generate parses on my ancient notebook!
I have two questions:
a) What is the reasoning for using multilingual BART for English parsing? I get that it makes sense to use mBART for non-English, but wouldn't English BART be better for this? I noticed that while mBART is generally good for English, it here and there makes a few silly mistakes that I think would occur less often in an English model.
b) Is there any paper related to your parsers?