cooelf / SemBERT

Semantics-aware BERT for Language Understanding (AAAI 2020)
https://arxiv.org/abs/1909.02209
MIT License

srl is not a registered name for Model. #15

Closed: AkibSadmanee closed this issue 3 years ago

AkibSadmanee commented 4 years ago

I put the extracted SRL model in srl_model_dir:

> ~/SemBERT/srl_model_dir$ ls
> config.json  files_to_archive.json  fta  vocabulary  weights.th

But when I run run_snli_predict.py, I get the error below.

08/06/2020 18:50:49 - INFO - __main__ -   device: cpu n_gpu: 0, distributed training: False, 16-bits training: False
08/06/2020 18:50:49 - INFO - pytorch_pretrained_bert.tokenization -   loading vocabulary file snli_model_dir/vocab.txt
08/06/2020 18:50:49 - INFO - allennlp.models.archival -   loading archive file srl_model_dir
Traceback (most recent call last):
  File "run_snli_predict.py", line 598, in <module>
    main()
  File "run_snli_predict.py", line 464, in main
    srl_predictor = SRLPredictor(args.tagger_path)
  File "/home/akib/SemBERT/tag_model/tagging.py", line 7, in __init__
    self.predictor = Predictor.from_path(SRL_MODEL_PATH)
  File "/home/akib/.local/lib/python3.8/site-packages/allennlp/predictors/predictor.py", line 275, in from_path
    load_archive(archive_path, cuda_device=cuda_device),
  File "/home/akib/.local/lib/python3.8/site-packages/allennlp/models/archival.py", line 192, in load_archive
    model = Model.load(
  File "/home/akib/.local/lib/python3.8/site-packages/allennlp/models/model.py", line 391, in load
    model_class: Type[Model] = cls.by_name(model_type)  # type: ignore
  File "/home/akib/.local/lib/python3.8/site-packages/allennlp/common/registrable.py", line 137, in by_name
    subclass, constructor = cls.resolve_class_name(name)
  File "/home/akib/.local/lib/python3.8/site-packages/allennlp/common/registrable.py", line 184, in resolve_class_name
    raise ConfigurationError(
allennlp.common.checks.ConfigurationError: srl is not a registered name for Model. You probably need to use the --include-package flag to load your custom code. Alternatively, you can specify your choices using fully-qualified paths, e.g. {"model": "my_module.models.MyModel"} in which case they will be automatically imported correctly.

This is my file system tree for reference:

.
├── README.md
├── SemBERT.png
├── data_process
│   ├── __init__.py
│   ├── __pycache__
│   │   ├── __init__.cpython-38.pyc
│   │   └── datasets.cpython-38.pyc
│   ├── datasets.py
│   └── util.py
├── glue_data
│   └── MNLI
│       ├── dev_matched.tsv_tag_label
│       ├── test_matched.tsv_tag_label
│       └── train.tsv_tag_label
├── output
├── pytorch_pretrained_bert
│   ├── __init__.py
│   ├── __main__.py
│   ├── __pycache__
│   │   ├── __init__.cpython-38.pyc
│   │   ├── file_utils.cpython-38.pyc
│   │   ├── modeling.cpython-38.pyc
│   │   ├── optimization.cpython-38.pyc
│   │   └── tokenization.cpython-38.pyc
│   ├── file_utils.py
│   ├── modeling.py
│   ├── optimization.py
│   └── tokenization.py
├── run_classifier.py
├── run_scorer.py
├── run_snli_predict.py
├── snli_model_dir
│   ├── bert_config.json
│   ├── pytorch_model.bin
│   └── vocab.txt
├── srl_model_dir
│   ├── config.json
│   ├── files_to_archive.json
│   ├── fta
│   │   ├── model.text_field_embedder.elmo.options_file
│   │   └── model.text_field_embedder.elmo.weight_file
│   ├── vocabulary
│   │   ├── labels.txt
│   │   ├── non_padded_namespaces.txt
│   │   └── tokens.txt
│   └── weights.th
└── tag_model
    ├── __pycache__
    │   ├── modeling.cpython-38.pyc
    │   ├── tag_tokenization.cpython-38.pyc
    │   └── tagging.cpython-38.pyc
    ├── modeling.py
    ├── tag_tokenization.py
    ├── tagger_offline.py
    └── tagging.py
cooelf commented 4 years ago

Please try `pip install --pre allennlp-models`.
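For context: on recent AllenNLP releases the built-in SRL model was split out into the separate allennlp-models package, so `Model.by_name("srl")` cannot resolve the name until that package is installed and imported. Below is a minimal sketch of checking that the tagger loads after the install; the module path `allennlp_models.structured_prediction` is an assumption based on recent allennlp-models layouts, not code from this repo.

```python
# Sketch, not the repo's exact code: verify that the "srl" model type resolves
# after installing allennlp-models. Importing the submodule below registers the
# SRL model and predictor with AllenNLP's Registrable machinery
# (module path assumed for recent allennlp-models versions).
from allennlp.predictors.predictor import Predictor
import allennlp_models.structured_prediction  # noqa: F401  -- registers "srl"

# srl_model_dir is the extracted archive directory from the original report.
predictor = Predictor.from_path("srl_model_dir")
print(predictor.predict(sentence="The keys were left on the table."))
```

If the name still fails to resolve, the installed allennlp and allennlp-models versions are probably mismatched with each other or with the checkpoint.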