Vercel AI Provider for running Large Language Models locally using Ollama
Note: This module is under development and may contain errors and frequent incompatible changes.

All releases will be MAJOR versions following the 0.MAJOR.MINOR scheme; only bug fixes and model updates will be released as MINOR versions. Please read the Tested models and capabilities section to learn which features this provider implements.
The Ollama provider is available in the `ollama-ai-provider` module. You can install it with:

```bash
npm i ollama-ai-provider
```
You can import the default provider instance `ollama` from `ollama-ai-provider`:

```ts
import { ollama } from 'ollama-ai-provider';
```
If you need a customized setup, you can import `createOllama` from `ollama-ai-provider` and create a provider instance with your settings:

```ts
import { createOllama } from 'ollama-ai-provider';

const ollama = createOllama({
  // custom settings
});
```
You can use the following optional settings to customize the Ollama provider instance (a short example follows the list):

- **baseURL** *string*

  Use a different URL prefix for API calls, e.g. to use proxy servers. The default prefix is `http://localhost:11434/api`.

- **headers** *Record<string,string>*

  Custom headers to include in the requests.
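For example, a provider instance pointing at a proxy with an authentication header could look like this (the URL and header values are hypothetical placeholders):

```ts
import { createOllama } from 'ollama-ai-provider';

// Hypothetical proxy URL and auth header, shown only to illustrate the settings.
const ollama = createOllama({
  baseURL: 'https://my-ollama-proxy.example.com/api',
  headers: {
    Authorization: 'Bearer my-token',
  },
});
```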
The first argument of the provider function is the model id, e.g. `phi3`:

```ts
const model = ollama('phi3');
```
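The returned model can be used with the AI SDK functions. A minimal sketch, assuming `phi3` has already been pulled locally with `ollama pull phi3`:

```ts
import { generateText } from 'ai';
import { ollama } from 'ollama-ai-provider';

// Generate a single text completion with a locally running model.
const { text } = await generateText({
  model: ollama('phi3'),
  prompt: 'Why is the sky blue?',
});

console.log(text);
```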
Inside the `examples` folder you will find example projects that show how the provider works. Each folder has its own README with a usage description.
This provider is capable of generating and streaming text and objects. Object generation may fail depending on the model and the schema used. At a minimum, it has been tested with the following features (a streaming sketch follows the table):
| Image input | Object generation | Tool usage | Tool streaming |
| --- | --- | --- | --- |
| :white_check_mark: | :white_check_mark: | :white_check_mark: | :warning: |
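Streaming text works with the same provider instance. A minimal sketch, again assuming a locally pulled `phi3` model:

```ts
import { streamText } from 'ai';
import { ollama } from 'ollama-ai-provider';

// Stream the completion token by token instead of waiting for the full text.
const { textStream } = await streamText({
  model: ollama('phi3'),
  prompt: 'Write a short poem about running models locally.',
});

for await (const chunk of textStream) {
  process.stdout.write(chunk);
}
```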
You need to use a model with visual understanding; several vision models have been tested. Ollama does not support image URLs, but the ai-sdk is able to download the file and send it to the model.
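A minimal sketch of sending an image, assuming a vision-capable model (`llava` is used here only as an illustration) and a local file `./image.png`:

```ts
import { readFileSync } from 'node:fs';
import { generateText } from 'ai';
import { ollama } from 'ollama-ai-provider';

// Send the image as raw bytes, since Ollama does not accept image URLs.
const { text } = await generateText({
  model: ollama('llava'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Describe this image.' },
        { type: 'image', image: readFileSync('./image.png') },
      ],
    },
  ],
});

console.log(text);
```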
This feature is unstable with some models. Some models handle structured output better than others, and there is a bug in Ollama that sometimes causes JSON generation to be slow or to end with an error. In my tests, I detected this behavior with the llama3 and phi3 models more often than with others such as openhermes and mistral, but you can experiment with them too.
More info about the bugs:
Remember that Ollama and this module are free software, so be patient.
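A minimal object-generation sketch, assuming an `openhermes` model pulled locally and a Zod schema (both chosen only for illustration):

```ts
import { generateObject } from 'ai';
import { ollama } from 'ollama-ai-provider';
import { z } from 'zod';

// Ask the model to produce JSON matching the schema; this may be slow or fail
// depending on the model, as noted above.
const { object } = await generateObject({
  model: ollama('openhermes'),
  schema: z.object({
    name: z.string(),
    ingredients: z.array(z.string()),
  }),
  prompt: 'Generate a simple pasta recipe.',
});

console.log(object);
```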
Ollama has introduced support for tool calling, enabling models to interact with external tools more seamlessly. Please see the list of models with tool support on the Ollama site.
Caveats:

- This feature is incomplete and unstable.
- Ollama does not support tool calling in streamed responses, but this provider can detect tool responses in the stream.
You can disable this experimental feature with the `experimentalStreamTools` setting:

```ts
ollama('model', {
  experimentalStreamTools: false,
});
```
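As a sketch of tool usage with the AI SDK (the model name, tool definition, and return value are illustrative assumptions; use a model with tool support):

```ts
import { generateText, tool } from 'ai';
import { ollama } from 'ollama-ai-provider';
import { z } from 'zod';

// Let the model call a hypothetical weather tool and inspect the calls it makes.
const { text, toolCalls } = await generateText({
  model: ollama('llama3.1'),
  tools: {
    weather: tool({
      description: 'Get the current weather for a location',
      parameters: z.object({ location: z.string() }),
      // Fake implementation for illustration only.
      execute: async ({ location }) => ({ location, temperature: 21 }),
    }),
  },
  prompt: 'What is the weather in Madrid?',
});

console.log(toolCalls, text);
```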
This provider supports Intercepting Fetch Requests.
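For example, assuming the provider forwards a custom `fetch` implementation (as described in the ai-sdk documentation on intercepting fetch requests), you could log every request sent to the Ollama server:

```ts
import { createOllama } from 'ollama-ai-provider';

// Wrap the global fetch to inspect requests before they reach the Ollama server.
const ollama = createOllama({
  fetch: async (url, options) => {
    console.log('Ollama request:', url);
    return fetch(url, options);
  },
});
```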
This provider supports Provider Management. Note that provider management is an experimental feature.
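A minimal sketch using the experimental provider registry from the `ai` package (the registry prefix and model id are illustrative):

```ts
import { experimental_createProviderRegistry as createProviderRegistry } from 'ai';
import { ollama } from 'ollama-ai-provider';

// Register the Ollama provider under the 'ollama' prefix.
const registry = createProviderRegistry({ ollama });

// Resolve a model through the registry using a 'providerId:modelId' string.
const model = registry.languageModel('ollama:phi3');
```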