NolanoOrg / cformers

SoTA Transformers with C-backend for fast inference on your CPU.
MIT License

Added ability to load local models, added early stopping, remove vocab check, fixed GPTJ model conversion #38

Open mallorbc opened 1 year ago

mallorbc commented 1 year ago

This PR is built on top of the pip package PR, so that PR should be merged first; if it changes, those changes will also need to be merged into this branch.

In this PR I added the ability to load a model from a local path.
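A minimal sketch of the branching this adds (the helper name is hypothetical; the real cformers code may structure this differently): if the model argument is a directory on disk, use it directly instead of treating it as a Hugging Face Hub id.

```python
import os

def resolve_model_source(model: str) -> str:
    """Hypothetical helper illustrating the branch this PR adds: a model
    string that is an existing directory is loaded locally; anything else
    is treated as a Hugging Face Hub id."""
    if os.path.isdir(model):
        # Use the converted GGML weights (and, per this PR, the tokenizer
        # and config) found in this folder instead of downloading.
        return "local"
    return "hub"
```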

I also added early stopping, allowing generation to halt once a given token is produced. Related to this, I changed the default behavior of waiting for the subprocess to finish. Using these two together, I was able to achieve a large speedup for my desired task.
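The early-stopping idea can be sketched as follows: stream the generation subprocess's output and terminate it as soon as the stop token appears, rather than waiting for the full run to complete. This is an illustrative sketch, not the actual cformers implementation.

```python
import subprocess
import sys

def generate_until(cmd, stop_token):
    """Stream a subprocess's stdout line by line and terminate it early
    once `stop_token` appears, instead of waiting for it to finish.
    (Illustrative sketch of the PR's early-stopping behavior.)"""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    out = []
    try:
        for line in proc.stdout:
            out.append(line)
            if stop_token in line:
                # Early stop: kill the generator rather than wait for it.
                proc.terminate()
                break
    finally:
        proc.stdout.close()
        proc.wait()
    return "".join(out)
```

Anything the process would have emitted after the stop token is never consumed, which is where the speedup comes from.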

I removed the check for vocab size. When fine-tuning, the vocab size can change, especially for GPTJ. Tracking down that this check was the cause of the failures was a headache, and removing it is needed for fine-tuned models.
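As background: GPT-J's config reports a vocab size of 50400 while its tokenizer has 50257 entries, and fine-tuning can resize the embedding again, so a strict equality check breaks on valid checkpoints. A hedged sketch of the more lenient approach (trust the shape actually present in the weights, warn on mismatch; function name and exact behavior are assumptions, not the PR's code):

```python
def check_vocab(n_vocab_config: int, n_vocab_weights: int) -> int:
    """Hypothetical relaxed check: instead of asserting that the
    config/tokenizer vocab matches the embedding matrix (which fails for
    many fine-tuned GPT-J checkpoints), prefer the size actually present
    in the weights and only warn on mismatch."""
    if n_vocab_config != n_vocab_weights:
        print(f"warning: vocab size mismatch (config={n_vocab_config}, "
              f"weights={n_vocab_weights}); using the weight shape")
    return n_vocab_weights
```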

I also made it so that when a local path is given, the vocab and config files in that folder are used instead of the ones from Hugging Face, which is again common for fine-tuning.
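For the config side, the fallback can be sketched like this (a hypothetical helper; the real code may read more files, e.g. the tokenizer vocab, and falls back to the Hub otherwise, which is omitted here):

```python
import json
import os

def load_local_config(model: str) -> dict:
    """Sketch of the local-folder fallback this PR describes: if `model`
    is a directory containing config.json, read it directly.  The Hub
    download path is intentionally omitted from this sketch."""
    config_path = os.path.join(model, "config.json")
    if os.path.isdir(model) and os.path.exists(config_path):
        with open(config_path) as f:
            return json.load(f)
    raise FileNotFoundError(f"no local config.json under {model!r}")
```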

This repo was the only one I could get working for GPTJ; other repos use a different GGML format. However, those other repos keep the model in memory via Python bindings, such as pyllamacpp. If that feature were added, this repo would be great, as suggested in #36.