-
As we approach running a Spike/PoC, we want to break it down into steps.
## Hypothesis
We believe that if we can create the ability to embed a CPU-based LLM into the CLI, we can enable users to g…
-
Hi,
I'm trying to use ollama-js in a Vite project.
What I did:
Initialized a new Vite project:
`npm create vite@latest`
`Project name: test-ollama`
Selected: Vanilla
`Select a variant: Jav…
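For reference, a minimal sketch of what the client code could look like in that setup, assuming Ollama is running locally on its default port, ollama-js is installed (`npm install ollama`), and the model name below is just an example:

```ts
// main.ts: a minimal sketch; assumes Ollama is running locally on its
// default port (11434) and that the model below has already been pulled.
import ollama from 'ollama/browser'

async function ask(prompt: string): Promise<string> {
  const response = await ollama.chat({
    model: 'llama3', // example model name; use one you have pulled
    messages: [{ role: 'user', content: prompt }],
  })
  return response.message.content
}

ask('Why is the sky blue?').then((answer) => {
  // The Vite starter template ships a #app element in index.html.
  document.querySelector('#app')!.textContent = answer
})
```

Since this runs in the browser, requests are subject to CORS; Ollama can be told to allow the dev-server origin via the `OLLAMA_ORIGINS` environment variable.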
-
It would be nice to support self-hosted LLMs. It doesn't have to be Ollama, but it seems fairly easy to interface with.
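For illustration, a rough sketch of how little is needed to talk to a self-hosted Ollama instance over its HTTP API (the base URL and model name are placeholders):

```ts
// Minimal sketch of calling a self-hosted Ollama instance over HTTP.
// Base URL and model name are placeholders for illustration.
const OLLAMA_URL = 'http://localhost:11434'

async function generate(prompt: string): Promise<string> {
  const res = await fetch(`${OLLAMA_URL}/api/generate`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'llama3', prompt, stream: false }),
  })
  if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`)
  const data = await res.json()
  return data.response // with stream: false, the full completion is in `response`
}
```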
-
I had written this at a time when the Ollama API didn't exist.
There is a lot of bloat around LangChain, and I'd like to get to something a bit more performant, especially when indexing and piping …
-
Support the OpenAI API format by giving an option to switch between Ollama's proprietary API format and the OpenAI API format.
To fetch the list of models - https://platform.openai.com/docs/api-reference/models/lis…
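As an illustration of what that switch could look like for model listing, a hedged sketch (base URL handling and naming are assumptions; the endpoint paths follow the respective public docs):

```ts
// Rough sketch of a client switching between the two formats when listing
// models. Only the endpoint paths come from the docs; the rest is illustrative.
type ApiFormat = 'ollama' | 'openai'

async function listModels(baseUrl: string, format: ApiFormat): Promise<string[]> {
  if (format === 'openai') {
    // OpenAI format: GET /v1/models -> { data: [{ id, ... }] }
    const res = await fetch(`${baseUrl}/v1/models`)
    const body = await res.json()
    return body.data.map((m: { id: string }) => m.id)
  }
  // Ollama's native format: GET /api/tags -> { models: [{ name, ... }] }
  const res = await fetch(`${baseUrl}/api/tags`)
  const body = await res.json()
  return body.models.map((m: { name: string }) => m.name)
}
```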
-
**Description:**
One of the reasons Ollama is so widely adopted as a tool to run local models is its ease of use and seamless integration with other tools. Users can simply install an app that star…
-
Great piece of software @d42me! It'd be pretty easy to code the two main LLM entry points to allow a range of interfaces instead of just OpenAI. In particular, using Ollama would open the whole system…
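For what it's worth, one low-effort route, assuming the project already uses the official `openai` client: Ollama exposes an OpenAI-compatible endpoint, so making the base URL configurable may be enough. A rough sketch (the env-var names are made up):

```ts
// Sketch: reusing the existing OpenAI client against Ollama's
// OpenAI-compatible endpoint. The env-var names are assumptions.
import OpenAI from 'openai'

const client = new OpenAI({
  baseURL: process.env.LLM_BASE_URL ?? 'http://localhost:11434/v1', // Ollama's compat endpoint
  apiKey: process.env.LLM_API_KEY ?? 'ollama', // Ollama ignores the key, but the client requires one
})

const completion = await client.chat.completions.create({
  model: process.env.LLM_MODEL ?? 'llama3',
  messages: [{ role: 'user', content: 'Hello!' }],
})
console.log(completion.choices[0].message.content)
```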
-
Having access to the model's tokenizer is extremely useful for counting tokens and managing the context window. In a lot of cases it's essential to get an LLM implementation to work properly. The model…
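As a stopgap, tokens can be counted client-side with a matching tokenizer. A rough sketch using transformers.js (the tokenizer repo name is an assumption and has to match the model actually in use):

```ts
// Rough sketch: counting tokens client-side with transformers.js as a
// stand-in while the model's own tokenizer isn't available through the API.
// The tokenizer repo name is an assumption; it must match the model in use.
import { AutoTokenizer } from '@xenova/transformers'

const tokenizer = await AutoTokenizer.from_pretrained('Xenova/llama-tokenizer')

function countTokens(text: string): number {
  return tokenizer.encode(text).length
}

// Trim history until the prompt fits an assumed context window.
function fitToContext(messages: string[], maxTokens: number): string[] {
  const kept = [...messages]
  while (kept.length > 1 && countTokens(kept.join('\n')) > maxTokens) {
    kept.shift() // drop the oldest message first
  }
  return kept
}
```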
-
I am trying to fine-tune Mistral 7B and Llama 3.1 for better performance on coding in a niche programming language.
And while I do get some interesting results using the Unsloth notebooks on Colab, …
-
Integrate AI capabilities into Seabreeze CLI to provide intelligent assistance for Docker and Seabreeze commands.
## Details
The AI feature will:
- Introduce a new Seabreeze command `ai` to h…
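Purely as an illustration of the shape this could take, a hypothetical sketch of the `ai` subcommand forwarding the question to a locally running Ollama instance (command wiring via commander, prompt text, and model name are all assumptions):

```ts
// Hypothetical sketch of the new `ai` subcommand: it forwards the user's
// question, plus a short system prompt, to a local Ollama instance and prints
// the reply. Command wiring, prompt text, and model name are all assumptions.
import { Command } from 'commander'

const program = new Command('seabreeze')

program
  .command('ai <question...>')
  .description('Ask for help with Docker and Seabreeze commands')
  .action(async (question: string[]) => {
    const res = await fetch('http://localhost:11434/api/generate', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        model: 'llama3',
        prompt: `You are a Docker and Seabreeze CLI assistant.\n\nQuestion: ${question.join(' ')}`,
        stream: false,
      }),
    })
    const data = await res.json()
    console.log(data.response)
  })

program.parseAsync(process.argv)
```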