Open nondefo opened 2 years ago
Place the contents of the directory "stanford-corenlp-full-2018-10-05", extracted from the parser downloaded in Step 7, into ./rule_based/parser/
Hi there,
Thanks for the quick response. The server is running. But now I am running this command in a separate terminal:
python3 ./learning_based/paralleloie.py -i data/pubmedabstracts.json
And I get this response:
Initializing Parallel Triple Extraction. Loading dependencies and dataset...
Traceback (most recent call last):
  File "./learning_based/paralleloie.py", line 35, in <module>
    from allennlp.predictors.predictor import Predictor
  File "/usr/local/lib/python3.7/site-packages/allennlp/predictors/__init__.py", line 9, in <module>
    from allennlp.predictors.predictor import Predictor
  File "/usr/local/lib/python3.7/site-packages/allennlp/predictors/predictor.py", line 12, in <module>
    from allennlp.data import DatasetReader, Instance
  File "/usr/local/lib/python3.7/site-packages/allennlp/data/__init__.py", line 1, in <module>
    from allennlp.data.dataset_readers.dataset_reader import DatasetReader
  File "/usr/local/lib/python3.7/site-packages/allennlp/data/dataset_readers/__init__.py", line 10, in <module>
    from allennlp.data.dataset_readers.ccgbank import CcgBankDatasetReader
  File "/usr/local/lib/python3.7/site-packages/allennlp/data/dataset_readers/ccgbank.py", line 9, in <module>
    from allennlp.data.dataset_readers.dataset_reader import DatasetReader
  File "/usr/local/lib/python3.7/site-packages/allennlp/data/dataset_readers/dataset_reader.py", line 8, in <module>
    from allennlp.data.instance import Instance
  File "/usr/local/lib/python3.7/site-packages/allennlp/data/instance.py", line 3, in <module>
    from allennlp.data.fields.field import DataArray, Field
  File "/usr/local/lib/python3.7/site-packages/allennlp/data/fields/__init__.py", line 7, in <module>
    from allennlp.data.fields.array_field import ArrayField
  File "/usr/local/lib/python3.7/site-packages/allennlp/data/fields/array_field.py", line 10, in <module>
    class ArrayField(Field[numpy.ndarray]):
  File "/usr/local/lib/python3.7/site-packages/allennlp/data/fields/array_field.py", line 50, in ArrayField
    @overrides
  File "/usr/local/lib/python3.7/site-packages/overrides/overrides.py", line 88, in overrides
    return _overrides(method, check_signature, check_at_runtime)
  File "/usr/local/lib/python3.7/site-packages/overrides/overrides.py", line 114, in _overrides
    _validate_method(method, super_class, check_signature)
  File "/usr/local/lib/python3.7/site-packages/overrides/overrides.py", line 135, in _validate_method
    ensure_signature_is_compatible(super_method, method, is_static)
  File "/usr/local/lib/python3.7/site-packages/overrides/signature.py", line 93, in ensure_signature_is_compatible
    ensure_return_type_compatibility(super_type_hints, sub_type_hints, method_name)
  File "/usr/local/lib/python3.7/site-packages/overrides/signature.py", line 288, in ensure_return_type_compatibility
    f"{method_name}: return type `{sub_return}` is not a `{super_return}`."
TypeError: ArrayField.empty_field: return type `None` is not a `<class 'allennlp.data.fields.field.Field'>`.
This doesn't seem to be an issue on my side. Could you let me know what the cause might be?
Try:
pip install overrides==3.1.0
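For context, newer releases of the overrides package enforce signature and return-type compatibility at class-definition time, and this version of AllenNLP declares `ArrayField.empty_field` without a return annotation, which that check rejects; pinning to 3.1.0 predates the strict check. A rough stdlib-only sketch of the kind of return-type validation involved (a hypothetical simplification, not the library's actual code):

```python
import typing

class Field:
    # The base class declares that empty_field must return a Field.
    def empty_field(self) -> "Field":
        ...

class ArrayField(Field):
    # The override is missing a return annotation, which is the
    # situation the traceback above complains about.
    def empty_field(self):
        return None

def return_type_compatible(base_method, sub_method):
    """Rough sketch: the override's return hint must exist and be a
    subclass of the base hint. The real overrides library applies much
    richer typing rules than this."""
    base_ret = typing.get_type_hints(base_method).get("return")
    sub_ret = typing.get_type_hints(sub_method).get("return")
    if base_ret is None:
        return True  # base declares nothing, so anything is acceptable
    return sub_ret is not None and issubclass(sub_ret, base_ret)

print(return_type_compatible(Field.empty_field, ArrayField.empty_field))  # False
```

Pinning the older version simply skips this validation rather than fixing the missing annotation in AllenNLP itself.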
For this command:
python3 ./learning_based/paralleloie.py -i data/pubmedabstracts.json
I eventually get an error:
Initializing Parallel Triple Extraction. Loading dependencies and dataset...
Done
Coreference resolution in progress...
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 38869/38869 [3:11:42<00:00, 3.38it/s]
Done
Triple extraction in progress...
0it [00:00, ?it/s]
Traceback (most recent call last):
  File "./learning_based/paralleloie.py", line 130, in <module>
Resource punkt not found. Please use the NLTK Downloader to obtain the resource:
  >>> import nltk
  >>> nltk.download('punkt')
For more information see: https://www.nltk.org/data.html
Attempted to load tokenizers/punkt/PY3/english.pickle
Searched in:
$ python3
>>> import nltk
>>> nltk.download("punkt")
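To keep this from resurfacing on a fresh machine, the download can be guarded so it only runs when the resource is actually missing; a small sketch of that lookup-then-download pattern (assumes nltk is installed; the import is deferred so defining the helper is cheap):

```python
def ensure_punkt():
    """Download the punkt tokenizer only if it is not already cached.
    Sketch of the lookup-then-download pattern; assumes nltk is installed."""
    import nltk
    try:
        # nltk.data.find raises LookupError on a cache miss.
        nltk.data.find("tokenizers/punkt")
    except LookupError:
        nltk.download("punkt")
```

Calling `ensure_punkt()` near the top of the script would make the manual interpreter step unnecessary.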
Thanks!
The final command: python3 ./rule_based/extract_refine.py -i extracted_triples_learning.csv
Returns:
[nltk_data] Downloading package stopwords to /Users/nony/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
Traceback (most recent call last):
File "./rule_based/extract_refine.py", line 361, in <module>
Change line 361 of ./rule_based/extract_refine.py to:
inp = ap.parse_args().infile
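The fix works because the script builds its own `argparse.ArgumentParser`: `parse_args()` returns a `Namespace`, and the input path lives on its `.infile` attribute. A minimal sketch of the corrected parsing (the `-i/--infile` flag name is an assumption inferred from the command and the fix above):

```python
import argparse

# Minimal reconstruction of the argument handling in extract_refine.py;
# the -i/--infile flag is assumed from the command shown above.
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--infile", required=True)

# parse_args() returns a Namespace; the input path is read from its
# .infile attribute, which is what the corrected line 361 does.
inp = ap.parse_args(["-i", "extracted_triples_learning.csv"]).infile
print(inp)  # extracted_triples_learning.csv
```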
Still getting an error:
[nltk_data] Downloading package stopwords to /Users/nony/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
Traceback (most recent call last):
File "./rule_based/extract_refine.py", line 366, in <module>
Ok, the triples have been extracted, thanks.
Error: Could not find or load main class edu.stanford.nlp.pipeline.StanfordCoreNLPServer
Caused by: java.lang.ClassNotFoundException: edu.stanford.nlp.pipeline.StanfordCoreNLPServer
. . .
when trying to run the command:
java -mx6g -cp "./rule_based/parser/*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 10000 -timeout 30000
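A ClassNotFoundException for the server class usually means the classpath wildcard `./rule_based/parser/*` matches no CoreNLP jars, e.g. because the download was extracted into a nested stanford-corenlp-full-2018-10-05/ subdirectory rather than its contents being placed directly into ./rule_based/parser/ as the setup instructions describe. A small sketch to check what the wildcard actually picks up (the default path is an assumption taken from the java command above):

```python
import glob
import os

def jars_on_classpath(parser_dir="./rule_based/parser"):
    """List the jar files that the java classpath wildcard parser_dir/*
    would match; an empty list explains why StanfordCoreNLPServer
    cannot be found."""
    return sorted(glob.glob(os.path.join(parser_dir, "*.jar")))
```

If `jars_on_classpath()` returns an empty list, moving the extracted jars up one level into ./rule_based/parser/ and re-running the java command should resolve the error.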