Hi,
I am using Stanford CoreNLP version 1.3.3 for the dependency parsing, as per the documentation. However, when I run the four samples presented in test_documents.py, I get slightly different parse trees and therefore slightly different politeness scores. My output file with the associated parse trees is attached: testoutput.xlsx
I manually checked the output XML files that the Stanford tool gives me, and it is the parse trees (every type of dependency parse) that differ from the samples; the sketch below shows how I generate and read them.
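For reference, this is roughly how I produce and inspect the parses. It is a minimal sketch: the jar list matches the 1.3.3 distribution, but the input file name (`input.txt`) and my use of the collapsed-ccprocessed dependency type are my own assumptions, not anything taken from this repo:

```python
import subprocess
import xml.etree.ElementTree as ET

# Run the CoreNLP 1.3.3 pipeline (Unix-style classpath separator).
# With this version the result is written as XML to input.txt.xml
# in the working directory.
subprocess.check_call([
    "java", "-Xmx3g",
    "-cp", "stanford-corenlp-1.3.3.jar:stanford-corenlp-1.3.3-models.jar"
           ":xom.jar:joda-time.jar",
    "edu.stanford.nlp.pipeline.StanfordCoreNLP",
    "-annotators", "tokenize,ssplit,pos,lemma,parse",
    "-file", "input.txt",
])

# Pull the dependencies out of the XML and print them in the
# "reltype(governor-i, dependent-j)" form used in test_documents.py.
tree = ET.parse("input.txt.xml")
for sentence in tree.iter("sentence"):
    for deps in sentence.iter("dependencies"):
        if deps.get("type") != "collapsed-ccprocessed-dependencies":
            continue
        for dep in deps.iter("dep"):
            gov = dep.find("governor")
            dpt = dep.find("dependent")
            print("%s(%s-%s, %s-%s)" % (
                dep.get("type"),
                gov.text, gov.get("idx"),
                dpt.text, dpt.get("idx"),
            ))
```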
What is the problem here?
If I continue with the parse trees that this tool generates for my other inputs, will I get drastically different results, or is that acceptable?
@sudhof @cristiandnm