Closed: hzahera closed this issue 2 years ago
@hzahera I experienced the same issue. Here's what I did to resolve it, and it worked.
In file: onmt/model_builder.py
In function: load_test_model
At line 92:
original: if opt.fairseq_model:
modified: if hasattr(opt, 'fairseq_model'):
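For reference, a small self-contained sketch of that guard; the getattr variant is my own suggestion (an assumption, not something taken from the repo) that only takes the fairseq branch when the flag is actually set:

```python
from argparse import Namespace

def wants_fairseq_branch(opt: Namespace) -> bool:
    # Workaround from the comment above: only check that the attribute exists.
    #     return hasattr(opt, 'fairseq_model')
    # Stricter variant (my assumption): also require the flag to be truthy, so
    # old checkpoints whose opt lacks the attribute fall through to the default
    # OpenNMT loading path.
    return bool(getattr(opt, 'fairseq_model', False))
```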
Thanks a lot for your reply. This modification works, but I got a new error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/tmp/ipykernel_15784/2277166719.py in <module>
6 setattr(opt, 'models', [one2seq_ckpt_path])
7
----> 8 translator = translator.build_translator(opt, report_score=False)
/data/OpenNMT-kpg-release/onmt/translate/translator.py in build_translator(opt, report_score, logger, out_file)
63 )
64 else:
---> 65 translator = Translator.from_opt(
66 model,
67 fields,
/data/OpenNMT-kpg-release/onmt/translate/translator.py in from_opt(cls, model, fields, opt, model_opt, global_scorer, out_file, report_align, report_score, logger)
330 logger=logger,
331 seed=opt.seed,
--> 332 kp_concat_type=opt.kp_concat_type,
333 model_kp_concat_type=model_opt.kp_concat_type,
334 beam_terminate=opt.beam_terminate,
AttributeError: 'Namespace' object has no attribute 'kp_concat_type'
@hzahera Going through the same struggles as you right now. What I did for the kp_concat_type error was to directly modify this line in translator.py:
kp_concat_type=opt.kp_concat_type,
replacing it with
kp_concat_type='one2one',
Not the most elegant solution, but probably the easiest one (you can also use 'one2many' if that's what you're using).
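A slightly more defensive variant (just a sketch of a suggestion, not what the repo does) is to patch the parsed options before building the translator, following the setattr pattern the inference notebook already uses, so the hard-coded default only kicks in when the option is missing:

```python
from argparse import Namespace

def ensure_kp_concat_type(opt: Namespace, default: str = 'one2one') -> None:
    # Call this right before translator.build_translator(opt, ...).
    # 'one2one' is an assumed default; pass 'one2many' if that matches your setup.
    if not hasattr(opt, 'kp_concat_type'):
        setattr(opt, 'kp_concat_type', default)
```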
Thank you for your reply. This helped me fix the bug!
However, I got a new related error :D @smolPixel @haseeb33
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/tmp/ipykernel_17266/3398033325.py in <module>
2 src=[text_to_extract],
3 tgt=None,
----> 4 src_dir=opt.src_dir,
5 batch_size=opt.batch_size,
6 attn_debug=opt.attn_debug,
AttributeError: 'Namespace' object has no attribute 'src_dir'
@hzahera If I remember correctly, I replaced opt with model_opt for this series of errors and it worked for me. It took me days to get it working, so I don't remember every step, but one big change was swapping opt for model_opt: model_opt is set up at the start of the program, whereas this opt is created somewhere in the middle and I don't know what it's used for.
P.S. Maybe I am wrong, but it worked for me.
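For what it's worth, here is a rough sketch of what that swap amounts to in the notebook cell; the attribute names come from the tracebacks above and the defaults are assumptions, so adjust them to your setup:

```python
from argparse import Namespace

def backfill_translate_opts(opt: Namespace, model_opt: Namespace) -> None:
    # Copy options the translate() call expects but `opt` is missing: take them
    # from `model_opt` when available, otherwise fall back to an assumed default.
    assumed_defaults = {'src_dir': '', 'batch_size': 8, 'attn_debug': False}
    for name, default in assumed_defaults.items():
        if not hasattr(opt, name):
            setattr(opt, name, getattr(model_opt, name, default))
```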
@haseeb33 Unfortunately it doesn't work. Can you please share your code?
Thanks,
@hzahera Apologies. I don't think the new code is perfectly compatible with the previous checkpoints. The parameters should be reloadable but some configs could be buggy.
We are already working on a tutorial and will release some new checkpoints and notebooks.
@memray Sorry to bug you, do you have any update on the tutorial and new checkpoints? Thank you very much!
Hi @smolPixel, we are developing a new package and some demos on the basis of huggingface transformers, but it will not work with the previous RNN models. I will upload some new checkpoints and update the notebook a bit, hopefully by next weekend. Please stay tuned.
Hi,
I am trying to use a pre-trained model from model.zip, namely
kp20k-meng17-verbatim_append-rnn-BS64-LR0.05-Layer1-Dim150-Emb100-Dropout0.0-Copytrue-Reusetrue-Covtrue-PEfalse-Contboth-IF1_step_50000.pt
I got an error when running the inference notebook example https://github.com/memray/OpenNMT-kpg-release/blob/master/notebook/inference.ipynb
Got error:
Can you please help me to run a pretrained model to generate keyphrases from text?