MichSchli / GraphMask

Official implementation of GraphMask as presented in our paper Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking.

Error for QA task. #1

Closed neolifer closed 3 years ago

neolifer commented 3 years ago

ConfigurationError Traceback (most recent call last)

in ()
      9
     10 model_trainer = ModelTrainer(configuration, gpu=args.gpu)
---> 11 model_trainer.train()

in train(self)
     19
     20     def train(self):
---> 21         problem = self.experiment_utils.build_problem()
     22         model = self.experiment_utils.build_model()
     23

/content/GraphMask/codes/utils/experiment_utils.py in build_problem(self)
     31             problem_class = QAProblem
     32
---> 33         problem = problem_class(self.configuration)
     34
     35         return problem

/content/GraphMask/codes/problems/qa/qa_problem.py in __init__(self, configuration)
     44         # self.predictor = pretrained.load_predictor("coref")
     45         self.predictor = Predictor.from_path(
---> 46             "https://s3-us-west-2.amazonaws.com/allennlp/models/coref-model-2018.02.05.tar.gz")
     47         # self.predictor = pretrained.load_predictor("coref-spanbert")
     48         # self.predictor = Predictor.from_path(

/usr/local/lib/python3.7/dist-packages/allennlp/predictors/predictor.py in from_path(cls, archive_path, predictor_name, cuda_device, dataset_reader_to_load, frozen, import_plugins, overrides)

/usr/local/lib/python3.7/dist-packages/allennlp/models/archival.py in load_archive(archive_file, cuda_device, overrides, weights_file)
    203     else:
    204         logger.warning(f"Archived file {replacement_filename} not found! At train time "
--> 205                        f"this file was located at {original_filename}. This may be "
    206                        "because you are loading a serialization directory. Attempting to "
    207                        "load the file from its train-time location.")

/usr/local/lib/python3.7/dist-packages/allennlp/models/archival.py in _load_dataset_readers(config, serialization_dir)
    229         serialization_dir=serialization_dir,
    230         cuda_device=cuda_device)
--> 231
    232     return Archive(model=model, config=config)
    233

/usr/local/lib/python3.7/dist-packages/allennlp/common/from_params.py in from_params(cls, params, constructor_to_call, constructor_to_inspect, **extras)

/usr/local/lib/python3.7/dist-packages/allennlp/common/params.py in pop_choice(self, key, choices, default_to_first_choice, allow_class_names)
    350         This is to allow e.g. specifying a model type as my_library.my_model.MyModel
    351         and importing it on the fly. Our check for "looks like" is extremely lenient
--> 352         and consists of checking that the value contains a '.'.
    353         """
    354         default = choices[0] if default_to_first_choice else self.DEFAULT

ConfigurationError: coref not in acceptable choices for dataset_reader.type: ['babi', 'conll2003', 'interleaving', 'multitask', 'multitask_shim', 'sequence_tagging', 'sharded', 'text_classification_json']. You should either use the --include-package flag to make sure the correct module is loaded, or use a fully qualified class name in your config file like {"model": "my_module.models.MyModel"} to have it imported automatically.
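The error suggests that the "coref" dataset reader is simply not registered with the installed allennlp: on allennlp >= 1.0 the coreference components were moved into the separate allennlp-models package. A minimal sketch of that idea is below, assuming allennlp-models is installed and that importing allennlp_models.coref registers the reader as a side effect (not verified against any specific version); even then, the 2018 archive and a current allennlp may not be compatible, which is consistent with the workaround reached further down.

```python
# Sketch only: assumes allennlp >= 1.0 with allennlp-models installed, and that
# importing allennlp_models.coref registers the "coref" dataset reader and model.
from allennlp.predictors.predictor import Predictor

import allennlp_models.coref  # noqa: F401  -- side-effect import for registration (assumed)

predictor = Predictor.from_path(
    "https://s3-us-west-2.amazonaws.com/allennlp/models/coref-model-2018.02.05.tar.gz"
)
```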
neolifer commented 3 years ago

Looks like this has to do with the allennlp version: the configuration style changed after v1.0. So I tried to install the version the repo expects, 0.9.0, but it still won't work; the error message is now TypeError: ArrayField.empty_field: return type None is not a <class 'allennlp.data.fields.field.Field'>. Then I tried the newest version, but its pretrained coreference model is too large to run.

neolifer commented 3 years ago

Problem solved by installing allennlp-models and downgrading spacy to version 2.2.1. This is what I ran:

pip install allennlp==0.9.0
pip install -U allennlp-models
pip install allennlp==0.9.0
pip install spacy==2.2.1
pip install -U nltk
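Under those pins, a quick sanity check that the bundled 0.9.0 coreference predictor loads and runs could look like the sketch below (the document string is made up; allennlp.pretrained is the 0.9.0 API used in the next comment).

```python
# Assumes the environment above: allennlp==0.9.0, spacy==2.2.1.
# In allennlp 0.9.0, allennlp.pretrained bundles the Lee et al. (2017) coref model.
import allennlp.pretrained

predictor = allennlp.pretrained.neural_coreference_resolution_lee_2017()

# The coreference predictor takes a raw document and returns predicted mention clusters.
result = predictor.predict(document="Paul Allen was born in Seattle. He grew up there.")
print(result["clusters"])
```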

Then replace the predictor in qa_problem.py with self.predictor = allennlp.pretrained.neural_coreference_resolution_lee_2017().
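Concretely, the change lands in codes/problems/qa/qa_problem.py; the sketch below is abridged and hypothetical apart from the two predictor lines quoted in this thread.

```python
# Abridged, hypothetical sketch of codes/problems/qa/qa_problem.py after the fix;
# only the predictor lines come from this thread, the surrounding class is illustrative.
import allennlp.pretrained


class QAProblem:

    def __init__(self, configuration):
        self.configuration = configuration

        # Original call, which fails because the "coref" dataset reader is not
        # registered by the installed allennlp:
        # self.predictor = Predictor.from_path(
        #     "https://s3-us-west-2.amazonaws.com/allennlp/models/coref-model-2018.02.05.tar.gz")

        # Replacement suggested above, using the predictor bundled with allennlp 0.9.0:
        self.predictor = allennlp.pretrained.neural_coreference_resolution_lee_2017()
```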