dmmiller612 / bert-extractive-summarizer

Easy to use extractive text summarization with BERT
MIT License

Can we Fine-tune or Pre-train it? #57

Open ShoubhikBanerjee opened 4 years ago

ShoubhikBanerjee commented 4 years ago

Is there any way to fine-tune or pre-train the model with a custom dataset?

dmmiller612 commented 4 years ago

You definitely can, through the transformers library: https://github.com/huggingface/transformers . This library builds on top of it by performing different summarizations on top of the embeddings.

ShoubhikBanerjee commented 4 years ago

I have already gone through the link you shared, but could not find anything useful for fine-tuning "extractive" summarization. Do you have any docs, notebooks, etc.?

nvenkatesh2409 commented 4 years ago

> This library abstracts on top of this by performing different summarizations on top of embeddings.

I pre-trained the model with a custom dataset, but when I pass this model as an input parameter (custom_model) I get the error "IndexError: index -2 is out of bounds for dimension 0 with size 1". When I debug, the hidden states list does not contain much. Please help me with this.
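This error usually means the model was loaded without `output_hidden_states=True`, so the summarizer finds only one tensor where it expects one per layer and indexing `[-2]` fails. A minimal sketch of the fix, constructing a `BertConfig` directly to illustrate (in practice you would load your checkpoint's config and set the same flag):

```python
from transformers import BertConfig

# By default hidden states are not returned, so indexing hidden_states[-2]
# fails when only the final output is present.
default_config = BertConfig()
print(default_config.output_hidden_states)  # prints False

# Enabling the flag makes the model return one tensor per layer.
fixed_config = BertConfig(output_hidden_states=True)
print(fixed_config.output_hidden_states)  # prints True
```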

abheet19 commented 3 years ago

Please tell us how to fine-tune by freezing the layers and adding our own NN.
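The summarizer itself has no hook for this, but the standard PyTorch pattern is to set `requires_grad = False` on the pretrained parameters and train only the new head. A sketch with a hypothetical tiny encoder standing in for BERT:

```python
import torch
import torch.nn as nn

# A tiny stand-in for a pretrained encoder such as BERT.
encoder = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 16))

# Freeze every pretrained parameter so only the new head is trained.
for param in encoder.parameters():
    param.requires_grad = False

# Our own NN on top: e.g. a per-sentence "keep / drop" classifier.
head = nn.Linear(16, 2)
model = nn.Sequential(encoder, head)

# Only the head's parameters reach the optimizer.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)
print(sum(p.numel() for p in trainable))  # prints 34 (16*2 weights + 2 biases)
```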

frankShih commented 2 years ago

> I have already gone through the above link that you shared. But could not get any thing good for "extractive" summarization , fine-tune. Do you have any docs/notebook, etc.?

I still cannot understand how to fine-tune the model for the "extractive" text summarization task.

Besides, the one link I found is about abstractive summarization: https://huggingface.co/course/chapter7/5?fw=tf

Any suggestion?
