
llm-mistral

LLM plugin providing access to Mistral models using the Mistral API

Installation

Install this plugin in the same environment as LLM:

llm install llm-mistral
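
You can confirm the plugin is installed by listing your installed plugins:

llm plugins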

Usage

First, obtain an API key for the Mistral API.

Configure the key using the llm keys set mistral command:

llm keys set mistral
<paste key here>
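
Alternatively, the key can be provided in an environment variable; assuming the plugin uses the default LLM_MISTRAL_KEY variable name, that looks like this:

export LLM_MISTRAL_KEY='<your key here>'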

You can now access the Mistral hosted models. Run llm models for a list.

To run a prompt through mistral-tiny:

llm -m mistral-tiny 'A sassy name for a pet sasquatch'

To start an interactive chat session with mistral-small:

llm chat -m mistral-small
Chatting with mistral-small
Type 'exit' or 'quit' to exit
Type '!multi' to enter multiple lines, then '!end' to finish
> three proud names for a pet walrus
1. "Nanuq," the Inuit word for walrus, which symbolizes strength and resilience.
2. "Sir Tuskalot," a playful and regal name that highlights the walrus' distinctive tusks.
3. "Glacier," a name that reflects the walrus' icy Arctic habitat and majestic presence.

To use a system prompt with mistral-medium to explain some code:

cat example.py | llm -m mistral-medium -s 'explain this code'

Model options

All three models accept the following options, using -o name value syntax:

temperature: Float between 0 and 1. Higher values produce more random output, lower values more focused and deterministic output.
top_p: Float between 0 and 1. Nucleus sampling: the model considers only the tokens making up the top top_p probability mass.
max_tokens: Integer. The maximum number of tokens to generate in the completion.
safe_mode: Boolean. Whether to inject a safety prompt before the conversation.
random_seed: Integer. Seed for random sampling, for more deterministic results.
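
For example, to run a prompt with a lower temperature (the prompt itself is just illustrative):

llm -m mistral-small -o temperature 0.2 'describe a walrus in one sentence'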

Refreshing the model list

Mistral sometimes releases new models.

To make those models available to an existing installation of llm-mistral, run this command:

llm mistral refresh

This will fetch and cache the latest list of available models. They should then become available in the output of the llm models command.

Embeddings

The Mistral Embeddings API can be used to generate 1,024-dimensional embeddings for any text.

To embed a single string:

llm embed -m mistral-embed -c 'this is text'

This will return a JSON array of 1,024 floating point numbers.

The LLM documentation has more, including how to embed in bulk and store the results in a SQLite database.
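
As a sketch of that bulk workflow, using LLM's embed-multi command (the collection name, file pattern and database name here are illustrative):

llm embed-multi my-docs -m mistral-embed --files docs '*.md' -d embeddings.db --store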

See "LLM now provides tools for working with embeddings" and "Embeddings: What they are and why they matter" for more about embeddings.

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd llm-mistral
python3 -m venv venv
source venv/bin/activate

Now install the dependencies and test dependencies:

llm install -e '.[test]'

To run the tests:

pytest