p208p2002 / Transformer-QG-on-SQuAD

Implement Question Generator with SOTA pre-trained Language Models (RoBERTa, BERT, GPT, BART, T5, etc.)
https://huggingface.co/p208p2002/bart-squad-qg-hl

How to use the hugging face method to infer? #3

Closed yanshuaibupt closed 1 year ago

yanshuaibupt commented 1 year ago

Thanks for your excellent work! When I load the tokenizer and model from Hugging Face, how can I use text as input to infer/generate a question?

yanshuaibupt commented 1 year ago

# Load the tokenizer and model from the Hugging Face Hub
# (checkpoint name taken from the repo link above).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("p208p2002/bart-squad-qg-hl")
model = AutoModelForSeq2SeqLM.from_pretrained("p208p2002/bart-squad-qg-hl")

# Mark the answer span in the context with [HL] tokens, then generate a question.
input_text = "Harry Potter is a series of seven fantasy novels written by British author, [HL]J. K. Rowling[HL]."
inp = tokenizer(input_text, return_tensors="pt")
generation_output = model.generate(inputs=inp["input_ids"].to(model.device))
response = tokenizer.decode(generation_output[0], skip_special_tokens=True)
print(response)
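If you are generating questions for many answer spans, the [HL]-marked input can be built programmatically instead of by hand. A minimal sketch, assuming the answer appears verbatim in the context (the `highlight` helper is hypothetical, not part of this repo):

```python
def highlight(context: str, answer: str) -> str:
    """Wrap the first occurrence of the answer span in [HL] tokens,
    producing the input format this model expects. Hypothetical helper."""
    idx = context.index(answer)  # raises ValueError if the answer is absent
    return context[:idx] + "[HL]" + answer + "[HL]" + context[idx + len(answer):]

context = "Harry Potter is a series of seven fantasy novels written by British author, J. K. Rowling."
print(highlight(context, "J. K. Rowling"))
# The result can be passed to the tokenizer as input_text above.
```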