sgomez / ollama-ai-provider

Vercel AI Provider for running LLMs locally using Ollama
https://www.npmjs.com/package/ollama-ai-provider

Allow setting object for additional ollama settings #27

Closed: caymaynard closed this issue 2 months ago

caymaynard commented 2 months ago

First off, I want to say thanks for making this provider; it makes working with Ollama from JavaScript a breeze.

Is your feature request related to a problem? Please describe. I've been looking at adding m_lock to my API requests (it is present in the Ollama API), and I noticed it is not currently supported by the provider.

Describe the solution you'd like While having m_lock as an extra provider option would be enough for me, I think it would be more helpful to allow specifying an object as part of OllamaChatSettings that is passed directly to the API request. That way, any new flags become usable immediately, without requiring an update to the provider. Roughly what I have in mind is sketched below.
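Just as an illustration (the `extraOptions` name is made up, not an existing setting):

```ts
import { ollama } from 'ollama-ai-provider';

// Hypothetical: an untyped pass-through object on the chat settings that the
// provider would merge verbatim into the `options` field of the Ollama API
// request, so new flags work without a provider release.
const model = ollama('llama3', {
  extraOptions: {
    m_lock: true, // the flag I want to set, forwarded as-is
  },
});
```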

Describe alternatives you've considered A workaround is to provide a custom fetch implementation that adds the settings manually. This is not very nice, though; I don't want to mimic fetch just to add a simple option. It ends up looking something like the sketch below.
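A rough sketch of that workaround, assuming the provider factory accepts a custom `fetch` the way the Vercel AI SDK providers do:

```ts
import { createOllama } from 'ollama-ai-provider';

// Wrap fetch and patch every outgoing request body just to inject one option.
const ollama = createOllama({
  fetch: async (input, init) => {
    if (typeof init?.body === 'string') {
      const body = JSON.parse(init.body);
      body.options = { ...body.options, m_lock: true };
      init = { ...init, body: JSON.stringify(body) };
    }
    return fetch(input, init);
  },
});
```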

sgomez commented 2 months ago

The problem with accepting any new flag without type checking is that you can misspell one of them.

For example, the correct name is use_mlock, not m_lock, and with an untyped object you have no way to detect that problem in your IDE or with a linter.
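Purely as an illustration (the property names here are made up, not the provider's actual settings), this is the kind of mistake a typed settings object catches and an untyped pass-through does not:

```ts
// Hypothetical typed settings vs. an untyped pass-through object.
interface TypedSettings {
  useMlock?: boolean; // hypothetical typed flag
}

// Compile-time error: the typo cannot slip through.
// @ts-expect-error Object literal may only specify known properties
const typed: TypedSettings = { mLock: true };

// The same typo in an untyped object is silently accepted here and then
// silently ignored by the Ollama API.
const untyped: Record<string, unknown> = { m_lock: true };
```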

I have followed this document to add the options: https://github.com/ollama/ollama/blob/main/docs/modelfile.md, but it doesn't seem to be up to date or complete.

I will publish a new version tomorrow with a revised set of allowed options.

Thanks!

sgomez commented 2 months ago

Closed in #28 and released.