patil-suraj / question_generation

Neural question generation using transformers
MIT License
1.1k stars 348 forks

Does this support only SQUAD dataset? #41

Open nabinkhadka opened 4 years ago

nabinkhadka commented 4 years ago

I was wondering if I can simply give it any dataset. It looks like it needs questions, answers, and contexts, so I suppose a dataset like the following, for example, is sufficient for training?

| question | answer | context   |
|----------|--------|-----------|
| q1       | a1     | context 1 |
| q2       | a2     | context 2 |
| q3       | a3     | context 3 |
| q4       | a4     | context 4 |

Can I train this way, by loading the dataset and leaving the rest of the notebook the same?

```python
valid_dataset = nlp.load_dataset('csv', data_files='/content/drive/My Drive/context_created.csv', split='train[:10%]')
train_dataset = nlp.load_dataset('csv', data_files='/content/drive/My Drive/context_created.csv', split='train[10%:]')
```
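As a point of reference, here is a minimal, standard-library-only sketch of producing a CSV in the question/answer/context layout described above. The column names are assumptions for illustration; check what the notebook and prepare_data.py actually expect.

```python
import csv
import io

# Hypothetical column names; verify against what prepare_data.py expects.
rows = [
    {"question": "q1", "answer": "a1", "context": "context 1"},
    {"question": "q2", "answer": "a2", "context": "context 2"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["question", "answer", "context"])
writer.writeheader()   # first line becomes the CSV header row
writer.writerows(rows)

csv_text = buf.getvalue()
```

A file written this way can then be passed to `nlp.load_dataset('csv', data_files=...)` as in the snippet above.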
WadoodAbdul commented 4 years ago

Yes, any dataset can be used to train the models.

If you put your data in SQuAD format and change the file paths used by the train and valid split generators in squad_multitask.py, you'll be good to go.
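For anyone unsure what that change involves: a dataset script for the `nlp` library declares one split generator per split, each pointing at a data file, so repointing those paths at your own SQuAD-format files is the whole edit. Below is a rough, self-contained sketch; a namedtuple stands in for `nlp.SplitGenerator` (the real script imports it from the `nlp` library), and the file paths are placeholders.

```python
from collections import namedtuple

# Stand-in for nlp.SplitGenerator so this sketch runs without the nlp library.
SplitGenerator = namedtuple("SplitGenerator", ["name", "gen_kwargs"])

# Hypothetical paths; point these at your own SQuAD-format files.
TRAIN_FILE = "data/my_train.json"
VALID_FILE = "data/my_valid.json"

def _split_generators():
    # squad_multitask.py has an equivalent method returning one generator per
    # split; changing the filepath values retargets training at your data.
    return [
        SplitGenerator(name="train", gen_kwargs={"filepath": TRAIN_FILE}),
        SplitGenerator(name="validation", gen_kwargs={"filepath": VALID_FILE}),
    ]

splits = _split_generators()
```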

nabinkhadka commented 4 years ago

@WadoodAbdul have you tried it? Were you able to load the model correctly? If so, can you share the snippet, please?

Thank you

patil-suraj commented 4 years ago

Thanks for answering the issue @WadoodAbdul .

For now squad_multitask is tied to the SQuAD dataset, but it's possible to use your own QA dataset as @WadoodAbdul said.

If you don't want to use that script, you can use a custom dataset as follows.

  1. Process your dataset into the input format the model expects (described in the README). You can adapt the code from the prepare_data.py script.
  2. Make sure the dataset returns source_ids, target_ids and attention_mask.
  3. Use your dataset here instead of loading the cached data.

The rest of the code can stay the same. Let me know if this helps.
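To illustrate step 2 above, here is a toy, framework-free sketch of a dataset whose items carry the three required keys. The whitespace "tokenizer" and the vocab are stand-ins for the real tokenizer (e.g. the T5 tokenizer the repo uses), purely so the example is self-contained; padding is omitted for brevity.

```python
class ToyQGDataset:
    """Toy dataset: each item returns source_ids, target_ids, attention_mask."""

    def __init__(self, examples, vocab):
        self.examples = examples  # list of (source_text, target_text) pairs
        self.vocab = vocab        # hypothetical word -> id mapping

    def _encode(self, text):
        # Stand-in for a real tokenizer's encode() method.
        return [self.vocab[word] for word in text.split()]

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        source, target = self.examples[idx]
        source_ids = self._encode(source)
        return {
            "source_ids": source_ids,
            "target_ids": self._encode(target),
            # 1 for every real token; padding positions (not shown) would be 0.
            "attention_mask": [1] * len(source_ids),
        }

vocab = {"answer:": 0, "a1": 1, "context:": 2, "context": 3, "1": 4, "q1": 5}
dataset = ToyQGDataset([("answer: a1 context: context 1", "q1")], vocab)
item = dataset[0]
```

In the actual training code the dataset would be a torch Dataset producing tensors, but the keys and their roles are the same.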

hariesramdhani commented 3 years ago

Hi @patil-suraj, thank you for the nice repo. Could you show us how to fine-tune on a custom dataset? I've tried the approach of changing the link in squad_multitask.py, but it keeps failing, and I had no luck loading the data directly with nlp.load_dataset in prepare_data.py either.

The following are my datasets:

Thank you very much

pat266 commented 2 years ago

@hariesramdhani I'm not sure if you are still stuck, but you need to pass `data_files='/path/to/file'` to `nlp.load_dataset()` (source: https://huggingface.co/docs/datasets/v0.4.0/add_dataset.html). For example:

```python
natural_question = nlp.load_dataset(
    "natural_questions",
    data_files='/content/drive/My Drive/natural_questions',
    split=nlp.Split.TRAIN,
)
```