microsoft / ProphetNet

A research project for natural language generation, containing the official implementations by MSRA NLC team.
MIT License

Loading the model #25

Status: Open. Opened by shravankumar33 3 years ago

shravankumar33 commented 3 years ago

The model gets loaded every time fairseq-generate is called to produce a summary. Is there any way to avoid loading the model on every inference? Is it possible to pre-load the model once and then run inference on it repeatedly? Thanks.

ShoubhikBanerjee commented 3 years ago

You can try loading a custom model (see the "Loading custom models" section of the fairseq documentation), e.g.:

```python
from fairseq.models.transformer import TransformerModel

zh2en = TransformerModel.from_pretrained(
    '/path/to/checkpoints',
    checkpoint_file='checkpoint_best.pt',
    data_name_or_path='data-bin/wmt17_zh_en_full',
    bpe='subword_nmt',
    bpe_codes='data-bin/wmt17_zh_en_full/zh.code',
)
zh2en.translate('你好 世界')
# 'Hello World'
```
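The pattern the question asks about (pay the load cost once, then serve many inference requests) can be sketched independently of fairseq. The snippet below is a minimal illustration, not ProphetNet code: `get_model`, `infer`, and the dummy model are hypothetical names, and the commented-out `from_pretrained` call stands in for whatever real loader you use.

```python
from functools import lru_cache

# Counts how often the (expensive) loader actually runs.
load_calls = 0

@lru_cache(maxsize=1)
def get_model():
    """Load the model once; subsequent calls return the cached object.

    In a real script this body would be something like:
        return TransformerModel.from_pretrained('/path/to/checkpoints', ...)
    Here a trivial callable stands in for the model (an assumption for
    illustration only).
    """
    global load_calls
    load_calls += 1
    return lambda text: text.upper()  # dummy "model"

def infer(text):
    model = get_model()  # cached after the first call, so no reload
    return model(text)

print(infer("hello"))  # first call triggers the load
print(infer("world"))  # reuses the cached model
print(load_calls)      # the loader ran exactly once
```

The same idea applies to a long-running service: keep the `from_pretrained` result in a module-level variable (or a cache like the one above) and call `translate`/`sample` on it per request, instead of invoking fairseq-generate as a fresh process each time.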