./decode.sh conll13st-test models/bpe/mlconvgec_aaai18_bpe.model models/dicts
This is the error:
Traceback (most recent call last):
  File "fairseq/interactive_multi.py", line 195, in <module>
    main(args)
  File "fairseq/interactive_multi.py", line 102, in main
    models, model_args = utils.load_ensemble_for_inference(model_paths, task)
  File "/home/liferay172/Documents/SundeepPidugu/crosentgec/fairseq/fairseq/utils.py", line 153, in load_ensemble_for_inference
    state = torch.load(filename, map_location=lambda s, l: default_restore_location(s, 'cpu'))
  File "/home/liferay172/anaconda3/lib/python3.7/site-packages/torch/serialization.py", line 426, in load
    return _load(f, map_location, pickle_module, **pickle_load_args)
  File "/home/liferay172/anaconda3/lib/python3.7/site-packages/torch/serialization.py", line 603, in _load
    magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: could not find MARK
Please let me know if I am passing the parameters correctly; an example would be much appreciated.
I was providing the wrong model directory. The second argument has to point at a directory containing the trained fairseq checkpoint(s), not at the BPE model file, which is why torch.load raised the UnpicklingError above.
The correct path would be something similar to the below.
./decode.sh conll13st-test models/crosent/model1 models/dicts
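For reference, a quick way to tell whether a path actually points at a loadable checkpoint is to try the same torch.load call that failed in the traceback. This is only a minimal sanity-check sketch; the filename checkpoint_best.pt is an assumption and may differ from what your model directory actually contains.

import torch

# Minimal sanity check: a fairseq checkpoint is a pickled state dict, so
# torch.load should succeed on it. A BPE merges file is not pickled, which
# is why pickle raised "could not find MARK" above.
# NOTE: "checkpoint_best.pt" is an assumed filename; adjust it to whatever
# your model directory contains.
path = "models/crosent/model1/checkpoint_best.pt"

try:
    state = torch.load(path, map_location="cpu")
    print("OK, checkpoint keys:", list(state.keys()))
except Exception as e:
    print("Not a loadable checkpoint:", e)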