Closed · alammehwish closed this issue 7 years ago
You forgot to include the Stanford CoreNLP package, which is a dependency of this project and used for tokenization (see README).
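The advice above amounts to making sure the CoreNLP jars appear on the *runtime* classpath, not just somewhere on disk. A minimal diagnostic sketch (not part of pathlstm; the class name is illustrative) that prints every entry the JVM actually sees:

```java
import java.io.File;

// Prints each runtime classpath entry on its own line.
// The stanford-corenlp jars must appear here for tokenization to work.
public class ShowClasspath {
    public static void main(String[] args) {
        String cp = System.getProperty("java.class.path");
        for (String entry : cp.split(File.pathSeparator)) {
            System.out.println(entry);
        }
    }
}
```

Running it with the same `-cp` value passed to `CompletePipeline` confirms whether the jars were actually picked up.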
Thanks. I had actually added them to the classpath after reading the other issues, but I am still getting the same error.
java -Xmx40g -cp libs/anna-3.3.jar:libs/stanford-corenlp-full-2015-12-09/stanford-corenlp-3.6.0-models.jar:libs/stanford-corenlp-full-2015-12-09/stanford-corenlp-3.6.0.jar:libs/stanford-corenlp-full-2015-12-09/stanford-corenlp-3.6.0-sources.jar:target/pathlstm.jar se.lth.cs.srl.CompletePipeline eng -lemma models/CoNLL2009-ST-English-ALL.anna-3.3.lemmatizer.model -tagger models/CoNLL2009-ST-English-ALL.anna-3.3.postagger.model -parser models/CoNLL2009-ST-English-ALL.anna-3.3.parser.model -srl models/srl-ACL2016-eng.model -tokenize -reranker -externalNNs -test sample.txt
at se.lth.cs.srl.preprocessor.tokenization.StanfordPTBTokenizer.tokenizeplus(StanfordPTBTokenizer.java:35)
at se.lth.cs.srl.preprocessor.Preprocessor.tokenizeplus(Preprocessor.java:37)
at se.lth.cs.srl.CompletePipeline.parse(CompletePipeline.java:73)
at se.lth.cs.srl.CompletePipeline.parseNonSegmentedLineByLine(CompletePipeline.java:165)
at se.lth.cs.srl.CompletePipeline.main(CompletePipeline.java:138)
I recompiled the StanfordPTBTokenizer class and exported a new jar file. Can you pull the latest version of pathlstm.jar and try again?
Thanks, it works now.
I tried using the prebuilt "pathlstm.jar" directly, as I was unable to compile the project with "mvn compile". I am getting an error. Could you please tell me, as soon as possible, whether I am doing something wrong?
java -Xmx40g -cp libs/anna-3.3.jar:target/pathlstm.jar se.lth.cs.srl.CompletePipeline eng -lemma models/CoNLL2009-ST-English-ALL.anna-3.3.lemmatizer.model -tagger models/CoNLL2009-ST-English-ALL.anna-3.3.postagger.model -parser models/CoNLL2009-ST-English-ALL.anna-3.3.parser.model -srl models/srl-ACL2016-eng.model -tokenize -reranker -externalNNs -test sample.txt
54.21.744 is2.data.ParametersFloat 121:read -> read parameters 134217727 not zero 296071
54.21.763 is2.data.Cluster 113: -> Read cluster with 0 words
54.21.764 is2.lemmatizer.Lemmatizer 192:readModel -> Loading data finished.
54.21.764 is2.lemmatizer.Lemmatizer 194:readModel -> number of params 134217727
54.21.765 is2.lemmatizer.Lemmatizer 195:readModel -> number of classes 92
54.26.6 is2.data.ParametersFloat 121:read -> read parameters 134217727 not zero 1613201
54.26.6 is2.data.Cluster 113: -> Read cluster with 0 words
54.26.7 is2.tag.Lexicon 103: -> Read lexicon with 0 words
54.26.7 is2.tag.Tagger 141:readModel -> Loading data finished.
54.26.55 is2.parser.Parser 188:readModel -> Reading data started
54.26.102 is2.data.Cluster 113: -> Read cluster with 0 words
54.31.336 is2.parser.ParametersFloat 101:read -> read parameters 134217727 not zero 19957525
54.31.336 is2.parser.Parser 201:readModel -> parsing -- li size 134217727
54.31.354 is2.parser.Parser 211:readModel -> Stacking false
54.31.355 is2.parser.Extractor 56:initStat -> mult (d4)
Used parser class is2.parser.Parser
Creation date 2012.11.02 14:33:53
Training data CoNLL2009-ST-English-ALL.txt.crossannotated
Iterations 10 Used sentences 10000000
Cluster null
54.31.361 is2.parser.Parser 240:readModel -> Reading data finnished
54.31.363 is2.parser.Extractor 56:initStat -> mult (d4)
Loading pipeline from models/srl-ACL2016-eng.model
Loading reranker from models/srl-ACL2016-eng.model
Writing corpus to out.txt...
Exception in thread "main" java.lang.Error: Unresolved compilation problems:
	PTBTokenizer cannot be resolved to a type
	Word cannot be resolved to a type
	PTBTokenizer cannot be resolved
	Word cannot be resolved to a type
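The unresolved names here are CoreNLP types (`PTBTokenizer`, `Word`), and a `java.lang.Error` with "Unresolved compilation problems" typically means the class in the jar was emitted by the Eclipse compiler despite compile errors, so it throws the moment it is executed. A minimal sketch of a diagnostic (a hypothetical helper, not part of pathlstm) that checks whether the CoreNLP tokenizer class is even loadable from the current classpath:

```java
// Checks whether CoreNLP's PTBTokenizer is loadable from the current classpath.
// (Hypothetical helper, not part of pathlstm.)
public class CoreNlpCheck {
    static boolean coreNlpOnClasspath() {
        try {
            Class.forName("edu.stanford.nlp.process.PTBTokenizer");
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(coreNlpOnClasspath()
                ? "CoreNLP is on the classpath"
                : "CoreNLP is missing: add the stanford-corenlp jars to -cp");
    }
}
```

One caveat: if the class inside the jar itself carries "Unresolved compilation problems", adding CoreNLP to the runtime classpath is not enough on its own; the class has to be recompiled against CoreNLP, which matches the fix described earlier in this thread (a recompiled StanfordPTBTokenizer exported into a fresh pathlstm.jar).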