This project provides a Python script to import a specific model file from a Hugging Face repository into Ollama. The script lets you list the available files in a repository, select a specific file to download, and create the metafile required by Ollama. You can run the `ollama create` command directly from the script, or manually after editing the metafile.
## Requirements

- `huggingface_hub` library
- `subprocess` module (part of the Python standard library)
- `ollama` command-line tool
- For `sftoguff.py`, you'll need llama.cpp installed and functional on your system.
## Installation

1. Install the required Python library:

   ```
   pip install huggingface_hub
   ```

2. Make sure the `ollama` command-line tool is installed and properly configured.
## Using `main.py`

1. Run the script:

   ```
   python main.py
   ```

2. Enter the Hugging Face model ID when prompted.
3. Select the file you want to download from the list of available files.
4. If the file already exists locally, decide whether to redownload it or skip it.
5. Confirm whether you want to run the `ollama create` command:
   - If yes, the script creates the metafile and imports the model (the model file should use the `.guff` extension).
   - If no, edit `metafile.txt` by hand and run `ollama create` yourself.
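The flow above can be sketched with the `huggingface_hub` API. `list_repo_files` and `hf_hub_download` are real functions from that library; the helper names, the prompts, and the single-line metafile format are illustrative assumptions, not the repository's exact code:

```python
import subprocess
from pathlib import Path


def pick_file(repo_id: str) -> str:
    """List the repository's files and return the one the user selects."""
    # Imported here so the pure helpers below work without the package installed.
    from huggingface_hub import list_repo_files

    files = list_repo_files(repo_id)
    print("Available files in the repository:")
    for i, name in enumerate(files, start=1):
        print(f"{i}. {name}")
    choice = int(input("Enter the number of the file you want to download: "))
    return files[choice - 1]


def build_metafile(model_path: str) -> str:
    """Minimal metafile contents pointing Ollama at the downloaded file."""
    return f"FROM {model_path}\n"


def build_create_command(model_name: str, metafile: str) -> list[str]:
    """Equivalent to running: ollama create <name> -f <metafile>."""
    return ["ollama", "create", model_name, "-f", metafile]


def import_model(repo_id: str, model_name: str) -> None:
    """Download the chosen file, write the metafile, and run ollama create."""
    from huggingface_hub import hf_hub_download

    filename = pick_file(repo_id)
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    Path("metafile.txt").write_text(build_metafile(local_path))
    subprocess.run(build_create_command(model_name, "metafile.txt"), check=True)
```

Keeping the command construction separate from the `subprocess.run` call makes the non-interactive parts easy to test without Ollama installed.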
## Using `sftoguff.py`

1. Run the script:

   ```
   python sftoguff.py
   ```

2. Enter the Hugging Face model ID when prompted.
3. The script downloads the model and converts it to a `.guff` file.
4. It asks whether you want to import the model into Ollama; if so, it launches `main.py`.
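The conversion step relies on llama.cpp. A minimal sketch of driving its converter with `subprocess` might look like the following; the converter script name and its flags vary between llama.cpp versions, so treat the exact path and arguments here as assumptions:

```python
import subprocess
from pathlib import Path


def build_convert_command(llama_cpp_dir: str, model_dir: str, outfile: str) -> list[str]:
    """Assemble a llama.cpp conversion command (script name and flags are assumptions)."""
    converter = str(Path(llama_cpp_dir) / "convert_hf_to_gguf.py")
    return ["python", converter, model_dir, "--outfile", outfile]


def convert_model(llama_cpp_dir: str, model_dir: str, outfile: str) -> None:
    """Run the converter and raise if it exits with a non-zero status."""
    subprocess.run(build_convert_command(llama_cpp_dir, model_dir, outfile), check=True)
```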
## Example

```
$ python main.py
Enter the Hugging Face model ID: bert-base-uncased
Available files in the repository:
1. config.json
2. pytorch_model.bin
3. vocab.txt
Enter the number of the file you want to download: 2
File 'pytorch_model.bin' already exists. Do you want to redownload it? (yes/no): no
Do you want to proceed with the 'ollama create' command? (yes/no): yes
Enter the name for the model (default: pytorch_model): my_custom_model
Model imported successfully!
```
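For reference, the generated `metafile.txt` is an Ollama Modelfile. In its simplest form it can be a single `FROM` line pointing at the local model file (the path below is illustrative):

```
FROM ./my_custom_model.guff
```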
## License

This project is licensed under the MIT License. See the LICENSE file for details.

## Contributing

Contributions are welcome! Please open an issue or submit a pull request with your changes.

## Contact

For any questions or issues, please post an issue on this repository.