dmmiller612 / bert-extractive-summarizer

Easy to use extractive text summarization with BERT

Which kind of model should I choose? #139

Open · frankShih opened this issue 1 year ago

frankShih commented 1 year ago

Since I cannot find any "rules" in the examples in the README, I assume I can choose any pre-trained model I want from HuggingFace, regardless of which task that model was trained for (see the sketch at the end of this question for how I'm loading one).

However, based on my understanding, shouldn't I be choosing models tagged "task:SentenceSimilarity" or "task:Summarization"?

Any suggestions?
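For context, here is roughly how I'm plugging an arbitrary Hub checkpoint into the summarizer. This is a minimal sketch following the custom-model pattern in the README; the SciBERT checkpoint is just a placeholder I picked, not a recommendation:

```python
from summarizer import Summarizer
from transformers import AutoConfig, AutoModel, AutoTokenizer

# Placeholder checkpoint: any BERT-style encoder from the Hub, as I understand it.
model_name = 'allenai/scibert_scivocab_uncased'

config = AutoConfig.from_pretrained(model_name)
config.output_hidden_states = True  # the summarizer works off hidden-state embeddings
tokenizer = AutoTokenizer.from_pretrained(model_name)
encoder = AutoModel.from_pretrained(model_name, config=config)

model = Summarizer(custom_model=encoder, custom_tokenizer=tokenizer)

body = "Some long body of text to summarize. It should contain several sentences. ..."
print(model(body, ratio=0.3))  # keep roughly 30% of the sentences
```

So my question is really whether the choice of `model_name` here should be restricted to models trained for particular tasks, or whether any encoder works.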