Open panda0881 opened 5 years ago
I think the major issue here is that the neural coref does not work with dependency parses only (it was not trained to work with dependency-based mention detection), so you have to run the `parse` annotator (which is unfortunately slow). It may be sufficient to make that change in your annotators list: instead of just `'annotators': 'coref'`, use `'annotators': 'tokenize,ssplit,pos,lemma,ner,parse,coref'`.
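As a minimal sketch of that change on the client side (the property names come from the comment above; the wrapper usage in the commented lines is an assumption based on the Lynten stanford-corenlp README, and the install path is illustrative):

```python
# The key change: request the full pipeline so the constituency `parse`
# annotator runs before `coref`, which the neural model requires.
props = {
    'annotators': 'tokenize,ssplit,pos,lemma,ner,parse,coref',
    'coref.algorithm': 'neural',  # assumption: select the neural model explicitly
    'outputFormat': 'json',
}

# Hypothetical usage with the Lynten wrapper (requires a local CoreNLP install):
# from stanfordcorenlp import StanfordCoreNLP
# nlp = StanfordCoreNLP(r'/path/to/stanford-corenlp-full')
# result = nlp.annotate('Barack Obama was born in Hawaii. He was the president.',
#                       properties=props)
```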
Alternatively, follow the Python library's instructions for running against an existing server: start the server independently of your Python process, the way the instructions describe.
Add `-serverProperties neural-coref.props` to the Java command, and put these settings in `neural-coref.props`:

```
annotators = tokenize, ssplit, pos, lemma, ner, parse, coref
coref.algorithm = neural
```

Then call the server with no special properties in the request.
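A quick sketch of what such a request looks like from Python, assuming a server started with the props file above on the default port 9000 (the `properties` query parameter is the CoreNLP server's standard mechanism; the actual network call is commented out since it needs a running server):

```python
import json
import urllib.parse

def build_request_url(server_url, output_format='json'):
    # Annotators and coref.algorithm are fixed server-side via
    # -serverProperties neural-coref.props, so the client only asks
    # for an output format -- no annotator list is sent.
    props = {'outputFormat': output_format}
    query = urllib.parse.urlencode({'properties': json.dumps(props)})
    return f'{server_url}/?{query}'

url = build_request_url('http://localhost:9000')

# To actually annotate text (requires the running server):
# import urllib.request
# req = urllib.request.Request(url, data='Obama was born in Hawaii.'.encode('utf-8'))
# print(urllib.request.urlopen(req).read().decode('utf-8'))
```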
There are of course numerous ways to resolve this situation.
Also, I do think you've uncovered a bug: the dependency-based mention detection models we are releasing have not been updated to work with the current code. I'll try to add clearer documentation about which options are possible with the coref annotators.
Thanks for the reply. Calling the server through the Python API is still not working, but I solved the problem by first writing the documents to files and then running the Java command on them directly. So I think the problem is with the server's support for the coreference package rather than with the coreference package itself.
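For reference, the batch approach described above can be sketched as follows. The class name and flags (`-annotators`, `-coref.algorithm`, `-filelist`, `-outputFormat`) are standard CoreNLP command-line options; the install directory, file list name, and memory setting are illustrative assumptions:

```python
import subprocess  # used only in the commented-out invocation below

def corenlp_batch_cmd(filelist, corenlp_dir='/path/to/stanford-corenlp-full',
                      memory='4g'):
    # Build the Java command that runs the full pipeline with neural coref
    # over every file listed (one path per line) in `filelist`.
    return [
        'java', f'-Xmx{memory}', '-cp', f'{corenlp_dir}/*',
        'edu.stanford.nlp.pipeline.StanfordCoreNLP',
        '-annotators', 'tokenize,ssplit,pos,lemma,ner,parse,coref',
        '-coref.algorithm', 'neural',
        '-filelist', filelist,
        '-outputFormat', 'json',
    ]

cmd = corenlp_batch_cmd('docs.txt')
# subprocess.run(cmd, check=True)  # requires a local CoreNLP install
```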
I was trying to use the coreference models as baselines. It went well with the 'dcoref' and 'statistical' coref models, but when I tried the 'neural' model, I received this bug:
For your information, I am using the latest version (3.9.2), and I'm using the stanford-corenlp wrapper (https://github.com/Lynten/stanford-corenlp) to call the server.
Can anyone help? Thank you very much!