IBM / text-generation-inference

IBM development fork of https://github.com/huggingface/text-generation-inference
Apache License 2.0

fix: fast tokenizer conversion should happen offline #106

Closed. tjohnson31415 closed this 3 months ago.

tjohnson31415 commented 3 months ago

Motivation

The server is launched with HF_HUB_OFFLINE=1 and is meant to treat model files as read-only. However, the fast tokenizer conversion performed in the launcher does not respect this when no revision is passed: it can resolve the latest commit of the model on HF Hub and download the tokenizer files for that newer commit. If the model has been updated on the Hub, the server then fails to load because the tokenizer files belong to the new commit while the rest of the model files were never downloaded.
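The intended behavior can be sketched as a conversion step that honors HF_HUB_OFFLINE by reading only files already on disk and failing rather than fetching a newer revision. This is a minimal illustration with a hypothetical helper name, not the actual launcher code:

```python
import os
from pathlib import Path

def convert_fast_tokenizer(model_dir: str) -> Path:
    """Hypothetical sketch: produce a fast tokenizer (tokenizer.json)
    using only files already present in model_dir; never download."""
    offline = os.environ.get("HF_HUB_OFFLINE", "0") == "1"
    tokenizer_json = Path(model_dir) / "tokenizer.json"
    if tokenizer_json.exists():
        # Fast tokenizer already converted; nothing to do.
        return tokenizer_json
    slow_files = [Path(model_dir) / n
                  for n in ("tokenizer_config.json", "vocab.json")]
    missing = [p.name for p in slow_files if not p.exists()]
    if missing and offline:
        # In offline mode we must fail here rather than reach out to
        # the Hub, which could resolve a newer commit whose tokenizer
        # files mismatch the locally cached model weights.
        raise FileNotFoundError(f"offline mode: missing {missing}")
    # (Online path, where downloading would be permitted, elided.)
    tokenizer_json.write_text("{}")  # placeholder for converted output
    return tokenizer_json
```

The key point is the guard: once HF_HUB_OFFLINE=1 is set, any missing input is a hard error instead of a trigger to re-resolve the model revision.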

Modifications

Result

Fast tokenizer conversion in the launcher never downloads new files; it uses only the locally available model files.

Related Issues