ajaysurya1221 opened 2 years ago
I guess this problem may be caused by a mismatch between the allennlp and torch versions. You can try to remove the constraint "torch===1.9.0" in requirements.txt and then let allennlp automatically download the best-matched torch version.
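For example, the change to requirements.txt might look like this (a sketch; the exact pinned line in the repo may differ):

```diff
-torch===1.9.0
+# no torch pin: let allennlp resolve a compatible torch version itself
```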
(winenv) C:\Users\Ajay\Desktop\FramenetParser>bash train_parser.sh
train_parser.sh: line 14: allennlp: command not found
train_parser.sh: line 26: syntax error near unexpected token `done'
train_parser.sh: line 26: `done'
I'm still getting this :(
(winenv) C:\Users\Ajay\Desktop\FramenetParser>pip list
Package                    Version
allennlp                   2.10.0
appdirs                    1.4.4
atomicwrites               1.4.1
attrs                      22.1.0
base58                     2.1.1
black                      20.8b1
blis                       0.7.8
boto3                      1.24.43
botocore                   1.27.43
cached-path                1.1.5
cachetools                 5.2.0
catalogue                  2.0.8
certifi                    2022.6.15
charset-normalizer         2.1.0
click                      8.1.3
colorama                   0.4.5
commonmark                 0.9.1
cymem                      2.0.6
dill                       0.3.5.1
docker-pycreds             0.4.0
fairscale                  0.4.6
filelock                   3.7.1
flake8                     5.0.3
gitdb                      4.0.9
GitPython                  3.1.27
google-api-core            2.8.2
google-auth                2.9.1
google-cloud-core          2.3.2
google-cloud-storage       2.4.0
google-crc32c              1.3.0
google-resumable-media     2.3.3
googleapis-common-protos   1.56.4
h5py                       3.7.0
huggingface-hub            0.8.1
idna                       3.3
importlib-metadata         4.12.0
iniconfig                  1.1.1
Jinja2                     3.1.2
jmespath                   1.0.1
joblib                     1.1.0
langcodes                  3.3.0
lmdb                       1.3.0
MarkupSafe                 2.1.1
mccabe                     0.7.0
more-itertools             8.13.0
murmurhash                 1.0.7
mypy                       0.800
mypy-extensions            0.4.3
nltk                       3.7
numpy                      1.21.6
packaging                  21.3
pathspec                   0.9.0
pathtools                  0.1.2
pathy                      0.6.2
Pillow                     9.2.0
pip                        20.1.1
pluggy                     1.0.0
preshed                    3.0.6
promise                    2.3
protobuf                   3.20.0
psutil                     5.9.1
py                         1.11.0
pyasn1                     0.4.8
pyasn1-modules             0.2.8
pycodestyle                2.9.0
pydantic                   1.8.2
pyflakes                   2.5.0
Pygments                   2.12.0
pyparsing                  3.0.9
pytest                     7.1.2
python-dateutil            2.8.2
PyYAML                     6.0
regex                      2022.7.25
requests                   2.28.1
rich                       12.1.0
rsa                        4.9
s3transfer                 0.6.0
sacremoses                 0.0.53
scikit-learn               1.0.2
scipy                      1.7.3
sentencepiece              0.1.96
sentry-sdk                 1.9.0
setproctitle               1.3.0
setuptools                 47.1.0
shortuuid                  1.0.9
six                        1.16.0
smart-open                 5.2.1
smmap                      5.0.0
spacy                      3.3.1
spacy-legacy               3.0.9
spacy-loggers              1.0.3
srsly                      2.4.4
tensorboardX               2.5.1
termcolor                  1.1.0
thinc                      8.0.17
threadpoolctl              3.1.0
tokenizers                 0.12.1
toml                       0.10.2
tomli                      2.0.1
torch                      1.11.0
torchvision                0.12.0
tqdm                       4.64.0
traitlets                  5.3.0
transformers               4.20.1
typed-ast                  1.4.3
typer                      0.6.1
typing-extensions          4.3.0
urllib3                    1.26.11
wandb                      0.12.21
wasabi                     0.10.1
zipp                       3.8.1
In the allennlp documentation they say, "we presently do not support Windows but are open to contributions." I'm sorry, but you may need to run this repo on a Linux system.
Now in Ubuntu:
(venv) ajay@ROG:/mnt/c/Users/Ajay/Desktop/FramenetParser$ bash train_parser.sh
Traceback (most recent call last):
File "/mnt/c/Users/Ajay/Desktop/FramenetParser/venv/bin/allennlp", line 8, in <module>
TypeError: FramenetParserReader._read: return type None is not a typing.Iterable[allennlp.data.instance.Instance].
train_parser.sh: line 26: syntax error near unexpected token `done'
train_parser.sh: line 26: `done'
Please Help
Try removing the overrides decorators in each file, not only in dataset_reader.py. Or you can downgrade the allennlp version to 2.3.0 with "pip install allennlp==2.3.0".
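For context, the TypeError above comes from newer versions of the `overrides` package checking method signatures strictly: a `_read` with no return annotation looks like it returns None. A minimal self-contained sketch of the annotation fix (using a stand-in `Instance` class instead of allennlp's, purely for illustration):

```python
from typing import Iterable


class Instance:
    """Stand-in for allennlp.data.Instance, for illustration only."""

    def __init__(self, text: str) -> None:
        self.text = text


class FramenetParserReader:
    # Without a return annotation, strict signature checks (as performed
    # by newer versions of the `overrides` package) treat the return type
    # as None and raise "return type None is not a typing.Iterable[...]".
    # Annotating the generator as Iterable[Instance] -- or simply dropping
    # the @overrides decorator -- avoids the error.
    def _read(self, file_path: str) -> Iterable[Instance]:
        for line in ["The quick fox.", "It jumped."]:
            yield Instance(line)
```

Downgrading allennlp to 2.3.0, as suggested above, reportedly pins an older `overrides` release that does not perform this check.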
Thanks, @Ch4osMy7h worked like a charm.
sorry to bother you @Ch4osMy7h
File "/mnt/c/Users/Ajay/Desktop/FramenetParser/venv/lib/python3.7/site-packages/allennlp/common/params.py", line 423, in assert_empty
"Extra parameters passed to {}: {}".format(class_name, self.params)
allennlp.common.checks.ConfigurationError: Extra parameters passed to Checkpointer: {'keep_most_recent_by_count': 1}
train_parser.sh: line 26: syntax error near unexpected token `done'
train_parser.sh: line 26: `done'
The full traceback is too big.
Change "keep_most_recent_by_count" to "num_serialized_models_to_keep" in the config file.
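Assuming the checkpointer is configured under the trainer section (the usual allennlp layout; surrounding keys elided here), the renamed option would look roughly like:

```json
"trainer": {
    "checkpointer": {
        "num_serialized_models_to_keep": 1
    }
}
```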
Now this:
File "/mnt/c/Users/Ajay/Desktop/FramenetParser/venv/lib/python3.7/site-packages/allennlp/modules/token_embedders/pretrained_transformer_embedder.py", line 195, in forward
    max_type_id = type_ids.max()
RuntimeError: CUDA error: no kernel image is available for execution on the device
Which type of GPU are you using? If you use an RTX 3090 or Tesla A100, you may need to download the correct version of torch with cudatoolkit >= 11.0 from https://pytorch.org/get-started/previous-versions.
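For example, a CUDA 11.3 build matching the torch/torchvision versions in the pip list above could be installed roughly like this (a sketch based on the PyTorch previous-versions page; pick the cudatoolkit suffix that matches your driver):

```shell
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 \
    --extra-index-url https://download.pytorch.org/whl/cu113
```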
ok thanks :) @Ch4osMy7h
Hey @Ch4osMy7h, I've started training and it is running fine. Thanks a lot. I need to write an inference script for my end project; could you please share one if you have written any? It would be a great help. Thanks in advance. :)
I have shown example code in README.md; you can see it at the end of the page.
Thanks, @Ch4osMy7h. I haven't seen such an enthusiastic developer who replies this quickly!
Hey @Ch4osMy7h, I started my training in Colab Pro+, and unfortunately the runtime got disconnected when it hit the 62nd epoch. Do you have your best model saved somewhere that you can share via Drive or something, please?
Could you please give me your email address? I will send the checkpoints to you.
ajaysuryasenthilrajan@gmail.com
you're a saviour @Ch4osMy7h 😊
Hi @Ch4osMy7h, I'm at the final stage now and I'm facing one issue. Could you please help?
Command : !allennlp predict --output-file output.txt --include-package framenet_parser --predictor framenet_parser --cuda-device 0 experiments/training/framenet_parser_v1.7_233 experiments/inference/sample.json
Traceback (most recent call last):
File "/usr/local/bin/allennlp", line 8, in <module>
Available names: spacy-legacy.CharacterEmbed.v1, spacy-legacy.EntityLinker.v1, spacy-legacy.HashEmbedCNN.v1, spacy-legacy.MaxoutWindowEncoder.v1, spacy-legacy.MishWindowEncoder.v1, spacy-legacy.MultiHashEmbed.v1, spacy-legacy.Tagger.v1, spacy-legacy.TextCatBOW.v1, spacy-legacy.TextCatCNN.v1, spacy-legacy.TextCatEnsemble.v1, spacy-legacy.Tok2Vec.v1, spacy-legacy.TransitionBasedParser.v1, spacy.CharacterEmbed.v2, spacy.EntityLinker.v1, spacy.HashEmbedCNN.v2, spacy.MaxoutWindowEncoder.v2, spacy.MishWindowEncoder.v2, spacy.MultiHashEmbed.v2, spacy.PretrainCharacters.v1, spacy.PretrainVectors.v1, spacy.Tagger.v1, spacy.TextCatBOW.v1, spacy.TextCatCNN.v1, spacy.TextCatEnsemble.v2, spacy.TextCatLowData.v1, spacy.Tok2Vec.v2, spacy.Tok2VecListener.v1, spacy.TorchBiLSTMEncoder.v1, spacy.TransitionBasedParser.v1, spacy.TransitionBasedParser.v2
My current environment is: torch 1.11.0, allennlp 2.10.0, spacy 3.3.1
You can check it.
Thanks, @Ch4osMy7h, the code is working all good.
train_parser.sh: line 14: allennlp: command not found
train_parser.sh: line 26: syntax error near unexpected token `done'
train_parser.sh: line 26: `done'
allennlp -v produced:
ModuleNotFoundError: No module named 'torch.ao.quantization'
Python version: 3.7.9, dependencies installed with pip install -r requirements.txt