obsei / obsei

Obsei is a low-code AI-powered automation tool. It can be used in various business flows like social listening, AI-based alerting, brand image analysis, comparative study and more.
https://obsei.com/
Apache License 2.0

Better offline support for transformers #86

Open GirishPatel opened 3 years ago

GirishPatel commented 3 years ago

Is your feature request related to a problem? Please describe. When deploying Docker containers on data centres, models need to be cached locally, either by copying them manually/with scripts or by having the code download them automatically. This should provide offline access to the models. Since the models are huge (multiple GBs), we need to reduce how often they are uploaded or downloaded.

Describe the solution you'd like Transformers can run models offline by using the environment variable TRANSFORMERS_OFFLINE=1. This is documented here - https://huggingface.co/transformers/installation.html#offline-mode We can achieve the first-time download automatically in code, with logic similar to the spacy download PR.
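For illustration, inference could then load everything from disk roughly like this (a minimal sketch, assuming the model and tokenizer were already saved to a local ./model directory; the path is just an example):

import os

# Set offline mode before importing transformers so that no network calls are attempted.
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Load the model and tokenizer from the local directory only.
model = AutoModelForQuestionAnswering.from_pretrained("./model")
tokenizer = AutoTokenizer.from_pretrained("./model")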

Describe alternatives you've considered

  1. Manually copy the model and have the code look for it. Requires mounting a disk into the Docker container.
  2. Build the Docker image with the model included. The image becomes too big.
shahrukhx01 commented 3 years ago

@lalitpagaria

  1. Could you share the Dockerfile which you built?
  2. What are the names of the models you tried it with?
  3. Maybe we can use a lightweight (distilled) version of the same model?
  4. How about something like this? A get-model script that we can execute while building the image:
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

def get_model(model):
  """Downloads a question-answering model from the Hugging Face model hub and saves it to ./model."""
  model = AutoModelForQuestionAnswering.from_pretrained(model, use_cdn=True)
  model.save_pretrained('./model')

def get_tokenizer(tokenizer):
  """Downloads a tokenizer from the Hugging Face model hub and saves it to ./model."""
  tokenizer = AutoTokenizer.from_pretrained(tokenizer)
  tokenizer.save_pretrained('./model')

get_model('<model-name>')
get_tokenizer('<tokenizer-name>')

Then in the Dockerfile we can just use COPY ./model .

lalitpagaria commented 3 years ago

@shahrukhx01 it is not about the Dockerfile. The main problem is that passing a custom model path and disabling internet access cause issues with the transformers lib. It is difficult to download the model files (bin, vocab, etc.) directly from the Hugging Face model hub and then use them; that approach does not work.

In the following two lines, the first line downloads the model and the second copies it to a different location.

    model = AutoModelForQuestionAnswering.from_pretrained(model, use_cdn=True)
    model.save_pretrained('./model')

I tried to deploy the model on a few MLOps platforms and added the model to my S3 bucket, but the transformers lib always tried to download the model.

shahrukhx01 commented 3 years ago

Could you paste the snippet of how you tried to load the model? As far as I remember, I did the same thing using the above script alongside the inference code, and it worked fine for me in offline mode. The only difference in my case was that I stored the model files in the docker image itself.
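Roughly, the offline load on my side looked like the sketch below (the ./model path mirrors the script above; local_files_only is an extra safeguard that keeps transformers from contacting the hub):

from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Load both the model and the tokenizer purely from the local ./model directory,
# never reaching out to the Hugging Face hub.
model = AutoModelForQuestionAnswering.from_pretrained('./model', local_files_only=True)
tokenizer = AutoTokenizer.from_pretrained('./model', local_files_only=True)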

lalitpagaria commented 3 years ago

hmm let me verify again and then update this thread.