MistralSharp is an unofficial .NET SDK for the Mistral AI Platform. Great for building AI-enhanced apps!
Start by downloading the NuGet package and importing it into your project.
Check out the Sample project to see an example of how to use the library in a simple console application.
To access the API endpoints, create a new instance of the MistralClient class and pass in your API key:
var mistralClient = new MistralClient(apiKey);
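Where the key comes from is up to you; one common pattern is to read it from an environment variable rather than hard-coding it (the variable name below is just an example, not something the SDK requires):

// Read the API key from an environment variable (the variable name is only an example).
var apiKey = Environment.GetEnvironmentVariable("MISTRAL_API_KEY")
             ?? throw new InvalidOperationException("MISTRAL_API_KEY is not set.");
var mistralClient = new MistralClient(apiKey);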
If you prefer dependency injection, you can instead register the client in your service collection:

services.AddMistral(options =>
{
    options.ApiKey = "YOUR_API_KEY";
});
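After registration, the client can be resolved wherever you need it. A minimal sketch, assuming AddMistral registers MistralClient itself as the injectable service (check the SDK's registration code for the exact service type):

public class ChatService
{
    private readonly MistralClient _mistralClient;

    // Assumes MistralClient is the registered service type; adjust if the SDK exposes an interface instead.
    public ChatService(MistralClient mistralClient)
    {
        _mistralClient = mistralClient;
    }
}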
This endpoint returns a list of available AI models on the Mistral platform.
var models = await mistralClient.GetAvailableModelsAsync();
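To inspect the result, you can loop over the returned models. A minimal sketch, assuming the response mirrors the underlying /models payload with a Data collection whose entries expose an Id (the property names are assumptions; check the SDK's model types):

// Property names (Data, Id) are assumptions based on the /models API response shape.
foreach (var model in models.Data)
{
    Console.WriteLine(model.Id);
}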
This method allows you to chat with an AI model of your choice. To start a chat, first create a new ChatRequest
object (note: only Model and Messages are required, the other parameters are optional and will default to the values
specified below):
var chatRequest = new ChatRequest()
{
    // The ID of the model to use. You can use GetAvailableModelsAsync() to get the list of available models.
    Model = ModelType.MistralMedium,

    // The list of messages to send to the model.
    // Role can be "system", "user", or "assistant"; Content is the message text.
    Messages =
    [
        new Message()
        {
            Role = "user",
            Content = "How can Mistral AI assist programmers?"
        }
    ],

    // The maximum number of tokens to generate in the completion.
    // The token count of your prompt plus max_tokens cannot exceed the model's context length.
    MaxTokens = 64,

    // Default: 0.7
    // The sampling temperature to use, between 0.0 and 2.0.
    // Higher values like 0.8 make the output more random, while lower values like 0.2 make it
    // more focused and deterministic.
    Temperature = 0.7,

    // Default: 1
    // Nucleus sampling: the model considers only the tokens comprising the top_p probability mass,
    // so 0.1 means only the tokens in the top 10% probability mass are considered.
    // Mistral generally recommends altering this or Temperature, but not both.
    TopP = 1,

    // Default: false
    // Whether to stream back partial progress. If set, tokens are sent as data-only server-sent events
    // as they become available, with the stream terminated by a data: [DONE] message. Otherwise, the
    // server holds the request open until timeout or completion and returns the full result as JSON.
    Stream = false,

    // Default: false
    // Whether to inject a safety prompt before all conversations.
    SafePrompt = false,

    // Default: null
    // The seed to use for random sampling. If set, repeated calls will generate deterministic results.
    RandomSeed = null
};
Finally, call the ChatAsync() method and pass in the ChatRequest object:
var sampleChat = await mistralClient.ChatAsync(chatRequest);
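To get at the generated text, read it off the response. A minimal sketch, assuming ChatResponse mirrors the chat completions payload with a Choices collection containing a Message.Content string (the property names are assumptions; check the ChatResponse type):

// Property names (Choices, Message, Content) are assumptions based on the chat completions API shape.
var reply = sampleChat.Choices[0].Message.Content;
Console.WriteLine(reply);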
Operates the same as ChatAsync(), except it supports streaming back partial progress (ChatRequest.Stream set to true) and returns an IAsyncEnumerable<ChatResponse>.
NOTE: This will be implemented in an upcoming release as it's still being worked on.
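Once it ships, an IAsyncEnumerable<ChatResponse> is typically consumed with await foreach. A minimal sketch, assuming the streaming method ends up being called ChatStreamAsync() (the method name and the shape of each chunk are assumptions until the release):

// Sketch only: the streaming API is not yet released; ChatStreamAsync() is an assumed name.
chatRequest.Stream = true;
await foreach (var chunk in mistralClient.ChatStreamAsync(chatRequest))
{
    // Each chunk is a partial ChatResponse delivered as it becomes available.
    Console.WriteLine(chunk);
}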
The embeddings API allows you to embed sentences and can be used to power a RAG application. To use it, first create a new EmbeddingRequest object:
var embeddings = new EmbeddingRequest()
{
    // The ID of the model to use for this request.
    Model = ModelType.MistralEmbed,

    // The format of the output data.
    EncodingFormat = "float",

    // The list of strings to embed.
    Input = new List<string>()
    {
        "Hello",
        "World"
    }
};
Lastly, pass the EmbeddingRequest object to the CreateEmbeddingsAsync() method:
var embeddedResponse = await mistralClient.CreateEmbeddingsAsync(embeddings);
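The response carries one vector per input string. A minimal sketch of reading them back, assuming the response mirrors the embeddings payload with a Data collection whose entries expose an Embedding list (the property names are assumptions; check the SDK's response type):

// Property names (Data, Embedding) are assumptions based on the embeddings API response shape.
foreach (var item in embeddedResponse.Data)
{
    Console.WriteLine($"Embedding with {item.Embedding.Count} dimensions");
}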