dmmiller612 / bert-extractive-summarizer

Easy to use extractive text summarization with BERT
MIT License

How can I manually provide a cache path for the model? #71

Open tomaszgarbus opened 4 years ago

tomaszgarbus commented 4 years ago

For my use case I want to keep all cached models (coming from different repos, mostly from Hugging Face) in one directory. How can I provide a cache path to the Summarizer?

Allenhuang0708 commented 3 years ago

I guess it doesn't provide an API for this yet, but I found a way to work around it by editing site-packages/transformers/modeling_utils.py: set pretrained_model_name_or_path to your directory in the from_pretrained method.

A more appropriate fix would be to modify the Summarizer itself; I am trying to open a pull request for this as well.
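A sketch of a workaround that avoids editing installed packages, assuming transformers honors the TRANSFORMERS_CACHE environment variable and that from_pretrained accepts a cache_dir argument (both long-standing transformers behaviors; the path below is hypothetical):

```python
import os

# Assumption: transformers reads TRANSFORMERS_CACHE (HF_HOME in newer
# versions) as its download cache. Set it before transformers is
# imported so every from_pretrained call lands in one shared directory.
cache_dir = os.path.expanduser("~/models/hf-cache")  # hypothetical path
os.environ["TRANSFORMERS_CACHE"] = cache_dir

# With the env var in place, the Summarizer itself needs no changes:
#
#   from summarizer import Summarizer
#   model = Summarizer()  # weights end up under cache_dir
#
# Alternatively, pass cache_dir to from_pretrained directly and hand
# the loaded objects to Summarizer via its custom_model /
# custom_tokenizer parameters (shown in this repo's README):
#
#   from transformers import AutoModel, AutoTokenizer
#   m = AutoModel.from_pretrained("bert-base-uncased", cache_dir=cache_dir)
#   t = AutoTokenizer.from_pretrained("bert-base-uncased", cache_dir=cache_dir)
#   model = Summarizer(custom_model=m, custom_tokenizer=t)

print(os.environ["TRANSFORMERS_CACHE"])
```

The environment-variable route changes nothing in this library; the custom_model route keeps the cache location explicit in code, at the cost of loading the model yourself.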

frederick0291 commented 2 years ago

I have the same question. Looks like I will have to follow Allen's suggestion for now.