Open Dontmindmes opened 1 year ago
I am getting the same issue when trying to run the QA-PubMedQA-BioGPT-Large model using:

```python
m = TransformerLanguageModel.from_pretrained(
    "checkpoints/QA-PubMedQA-BioGPT-Large",
    "checkpoint_avg.pt",
    "data",
    tokenizer="moses",
    bpe="fastbpe",
    bpe_codes="data/bpecodes",
    min_len=100,
    max_len_b=1024,
)
```
What was the error you were getting?
```
File "/Users/kristian/Desktop/medical-gpt/biogpt/BioGPT/test.py", line 4, in <module>
  m = TransformerLanguageModel.from_pretrained(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/fairseq/models/fairseq_model.py", line 267, in from_pretrained
  x = hub_utils.from_pretrained(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/fairseq/hub_utils.py", line 73, in from_pretrained
  models, args, task = checkpoint_utils.load_model_ensemble_and_task(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/fairseq/checkpoint_utils.py", line 432, in load_model_ensemble_and_task
  task = tasks.setup_task(cfg.task)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/fairseq/tasks/__init__.py", line 42, in setup_task
  assert (
AssertionError: Could not infer task type from {'_name': 'language_modeling_prompt', 'data': 'data',
'sample_break_mode': 'none', 'tokens_per_sample': 2048, 'output_dictionary_size': -1, 'self_target': False, 'future_target':
False, 'past_target': False, 'add_bos_token': False, 'max_target_positions': 2048, 'shorten_method': 'none',
'shorten_data_split_list': '', 'pad_to_fixed_length': False, 'pad_to_fixed_bsz': False, 'seed': 1, 'batch_size': None,
'batch_size_valid': None, 'dataset_impl': None, 'data_buffer_size': 10, 'tpu': False, 'use_plasma_view': False,
'plasma_path': '/tmp/plasma', 'source_lang': None, 'target_lang': None, 'max_source_positions': 1900, 'manual_prompt':
None, 'learned_prompt': 9, 'learned_prompt_pattern': 'learned', 'prefix': False, 'sep_token': '<seqsep>'}. Available
argparse tasks: dict_keys(['sentence_prediction', 'sentence_prediction_adapters', 'speech_unit_modeling',
'hubert_pretraining', 'denoising', 'multilingual_denoising', 'translation', 'multilingual_translation',
'translation_from_pretrained_bart', 'translation_lev', 'language_modeling', 'speech_to_text', 'legacy_masked_lm',
'text_to_speech', 'speech_to_speech', 'online_backtranslation', 'simul_speech_to_text', 'simul_text_to_text',
'audio_pretraining', 'semisupervised_translation', 'frm_text_to_speech', 'cross_lingual_lm',
'translation_from_pretrained_xlm', 'multilingual_language_modeling', 'audio_finetuning', 'masked_lm', 'sentence_ranking',
'translation_multi_simple_epoch', 'multilingual_masked_lm', 'dummy_lm', 'dummy_masked_lm', 'dummy_mt']). Available
hydra tasks: dict_keys(['sentence_prediction', 'sentence_prediction_adapters', 'speech_unit_modeling',
'hubert_pretraining', 'translation', 'translation_lev', 'language_modeling', 'simul_text_to_text', 'audio_pretraining',
'translation_from_pretrained_xlm', 'multilingual_language_modeling', 'audio_finetuning', 'masked_lm', 'dummy_lm',
'dummy_masked_lm'])
```
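For what it's worth: the `AssertionError` says the checkpoint's task name, `language_modeling_prompt`, is not in either of the task registries fairseq lists. That task is defined in the BioGPT repository itself, not in fairseq, and fairseq-style tasks only enter the registry when the module defining them is imported, so loading the checkpoint through plain `TransformerLanguageModel` never registers it. Below is a minimal sketch of that registration mechanism (the registry, decorator, and class names here are illustrative, not fairseq's actual internals):

```python
# Sketch of a fairseq-style task registry: tasks register themselves
# via a decorator, so a task is only visible once the module that
# defines it has actually been imported.
TASK_REGISTRY = {}

def register_task(name):
    """Decorator that adds a task class to the global registry."""
    def decorator(cls):
        TASK_REGISTRY[name] = cls
        return cls
    return decorator

def setup_task(name):
    # Mirrors the assertion that fails in the traceback above.
    assert name in TASK_REGISTRY, f"Could not infer task type from {name!r}"
    return TASK_REGISTRY[name]()

@register_task("language_modeling")
class LanguageModelingTask:
    pass

# At this point "language_modeling_prompt" is unknown, exactly like in
# the traceback:
try:
    setup_task("language_modeling_prompt")
except AssertionError as e:
    print("error:", e)

# Importing BioGPT's own task/model code has the side effect of running
# a registration like this one, after which setup_task succeeds:
@register_task("language_modeling_prompt")
class LanguageModelingPromptTask:
    pass

print(setup_task("language_modeling_prompt").__class__.__name__)
```

If that diagnosis applies here, the usual workaround is to run the script from the BioGPT repository root and load the checkpoint through BioGPT's own model wrapper (its README examples import a prompt-aware model class from the repo's `src` package, which registers the custom task as a side effect). The exact module path depends on the repo layout, so treat any specific import as something to verify against the BioGPT README.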
Hello @Kristian-A @Dontmindmes, were you ever able to fix this?
I'm having the same issue. I've fine-tuned BioGPT on a relation extraction task, and training with fairseq went fine, but now I can't evaluate it.
Was anyone able to solve this?
Hello, when I execute the following code I get the following error (Windows 11):