Closed tiagodavi closed 1 year ago
Yes, it downloads them to your operating system's cache directory.
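As a sketch (the environment variable name is per my reading of the Bumblebee docs, and the paths/repo here are placeholders), the cache location can be overridden before loading:

```elixir
# Override where Bumblebee caches downloaded files. When
# BUMBLEBEE_CACHE_DIR is unset, the OS-specific user cache
# directory is used instead.
System.put_env("BUMBLEBEE_CACHE_DIR", "/data/bumblebee_cache")

# Subsequent downloads land under the directory above.
{:ok, model_info} = Bumblebee.load_model({:hf, "bert-base-uncased"})
```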
You cannot use just any model: its architecture needs to be supported by Bumblebee. The error means this one isn't supported, and we have an open issue to improve the error message.
The best way to use models is through the task APIs, such as: https://hexdocs.pm/bumblebee/Bumblebee.Text.html - or use Livebook! Inside a notebook, click on "Smart cell" and pick the "Neural network" task.
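As a sketch of the task-API flow (using gpt2 as a stand-in checkpoint; exact option names may vary across Bumblebee versions):

```elixir
# Load the model, tokenizer, and generation config for one checkpoint.
{:ok, model_info} = Bumblebee.load_model({:hf, "gpt2"})
{:ok, tokenizer} = Bumblebee.load_tokenizer({:hf, "gpt2"})
{:ok, generation_config} = Bumblebee.load_generation_config({:hf, "gpt2"})

# Build an Nx.Serving for text generation and run it on a prompt.
serving = Bumblebee.Text.generation(model_info, tokenizer, generation_config)
Nx.Serving.run(serving, "Elixir is")
```

The "Neural network" smart cell in Livebook generates essentially this kind of code for you.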
@tiagodavi that repo is missing the tokenizer.json file. I opened a PR to add it, though it's hard to tell when/if it will be merged. That said, you can load the tokenizer with this:
Bumblebee.load_tokenizer(
  {:hf, "gorilla-llm/gorilla-7b-hf-delta-v0",
   revision: "dd40c8cb4494be82bef3cb9f8e4841291a3431df"}
)
Hi Folks, Thank you for the awesome library.
1 - What happens when we call Bumblebee.load_model(repository)? Does it download the entire model somewhere on my computer? Where?
2 - How do I use a custom model? For example, I would like to use a generative model to turn Markdown into HTML using a prompt, and I tried:
3 - How do I use https://hexdocs.pm/bumblebee/Bumblebee.Text.Llama.html? I didn't understand how to call it with a prompt to solve my task.
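For anyone landing here with the same question: the Llama module is usually driven through the generation task API rather than called directly. A sketch, where the checkpoint name is a placeholder and gated repos would additionally need an auth token (e.g. {:hf, repo, auth_token: token}):

```elixir
# Placeholder checkpoint; swap in the Llama repo you actually use.
repo = {:hf, "meta-llama/Llama-2-7b-hf"}

{:ok, model_info} = Bumblebee.load_model(repo)
{:ok, tokenizer} = Bumblebee.load_tokenizer(repo)
{:ok, generation_config} = Bumblebee.load_generation_config(repo)

# The prompt is passed as plain text to the serving.
serving = Bumblebee.Text.generation(model_info, tokenizer, generation_config)
Nx.Serving.run(serving, "Convert this Markdown to HTML: **bold** text")
```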