yzhangcs / parser

🚀 State-of-the-art parsers for natural language.
https://parser.yzhang.site/

Error --> Parser.load('biaffine-dep-zh') #107

Closed · KangChou closed this issue 2 years ago

KangChou commented 2 years ago

Test script:


from supar import Parser

# load the pretrained Chinese biaffine dependency parser
parser = Parser.load('biaffine-dep-zh')
print("Loaded pretrained model:", parser)

# predict on raw text; lang='zh' routes tokenization through Stanza
dataset = parser.predict('我爱你中国。', lang='zh', prob=True, verbose=False)
print(dataset)

Output:

Loaded pretrained model: <supar.parsers.dep.BiaffineDependencyParser object at 0x7f8d33b37550>
Traceback (most recent call last):
  File "supar_demo.py", line 9, in <module>
    dataset = parser.predict('我爱你中国。', lang='zh', prob=True, verbose=False)
  File "/home/zkpark/anaconda3/envs/py37/lib/python3.7/site-packages/supar/parsers/dep.py", line 124, in predict
    return super().predict(**Config().update(locals()))
  File "/home/zkpark/anaconda3/envs/py37/lib/python3.7/site-packages/supar/parsers/parser.py", line 131, in predict
    dataset = Dataset(self.transform, data, lang=lang)
  File "/home/zkpark/anaconda3/envs/py37/lib/python3.7/site-packages/supar/utils/data.py", line 37, in __init__
    self.sentences = transform.load(data, **kwargs)
  File "/home/zkpark/anaconda3/envs/py37/lib/python3.7/site-packages/supar/utils/transform.py", line 334, in load
    data = [tokenizer(i) for i in ([data] if isinstance(data, str) else data)]
  File "/home/zkpark/anaconda3/envs/py37/lib/python3.7/site-packages/supar/utils/transform.py", line 334, in <listcomp>
    data = [tokenizer(i) for i in ([data] if isinstance(data, str) else data)]
  File "/home/zkpark/anaconda3/envs/py37/lib/python3.7/site-packages/supar/utils/tokenizer.py", line 15, in __call__
    return [i.text for i in self.pipeline(text).sentences[0].tokens]
  File "/home/zkpark/anaconda3/envs/py37/lib/python3.7/site-packages/stanza/pipeline/core.py", line 210, in __call__
    doc = self.process(doc)
  File "/home/zkpark/anaconda3/envs/py37/lib/python3.7/site-packages/stanza/pipeline/core.py", line 204, in process
    doc = process(doc)
  File "/home/zkpark/anaconda3/envs/py37/lib/python3.7/site-packages/stanza/pipeline/tokenize_processor.py", line 92, in process
    no_ssplit=self.config.get('no_ssplit', False))
  File "/home/zkpark/anaconda3/envs/py37/lib/python3.7/site-packages/stanza/models/tokenization/utils.py", line 143, in output_predictions
    pred = np.argmax(trainer.predict(batch), axis=2)
  File "/home/zkpark/anaconda3/envs/py37/lib/python3.7/site-packages/stanza/models/tokenization/trainer.py", line 67, in predict
    pred = self.model(units, features)
  File "/home/zkpark/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/zkpark/anaconda3/envs/py37/lib/python3.7/site-packages/stanza/models/tokenization/model.py", line 49, in forward
    inp, _ = self.rnn(emb)
  File "/home/zkpark/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/zkpark/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/rnn.py", line 659, in forward
    self.check_forward_args(input, hx, batch_sizes)
  File "/home/zkpark/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/rnn.py", line 605, in check_forward_args
    self.check_input(input, batch_sizes)
  File "/home/zkpark/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/rnn.py", line 204, in check_input
    self.input_size, input.size(-1)))
RuntimeError: input.size(-1) must be equal to input_size. Expected 45, got 37
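
The size mismatch (expected 45, got 37) is raised inside Stanza's tokenizer, not in supar itself, which points to the cached Stanza zh tokenizer model being out of sync with the installed stanza package. Below is a minimal workaround sketch, assuming a standard stanza/supar install (not verified against this exact environment): either re-download the Stanza Chinese models, or pass already-tokenized words so the Stanza tokenizer is never invoked.


import stanza
from supar import Parser

# Option 1: refresh the cached Stanza Chinese models so they match the
# installed stanza version (downloads into ~/stanza_resources by default).
stanza.download('zh')

parser = Parser.load('biaffine-dep-zh')

# Option 2: pass pre-tokenized words and omit the lang argument, so supar
# skips the Stanza tokenization step entirely.
dataset = parser.predict(['我', '爱', '你', '中国', '。'], prob=True, verbose=False)
print(dataset)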
github-actions[bot] commented 2 years ago

This issue is stale because it has been open for 30 days with no activity.

github-actions[bot] commented 2 years ago

This issue was closed because it has been inactive for 7 days since being marked as stale.