-
### Describe the feature
Vercel's `ai` package has built-in support for Mistral. We just need to add the right instrumentation to support it as well, probably similar to what we did for Azure.
Docs: htt…
-
### Problem Statement
Add support for https://docs.sentry.io/product/insights/llm-monitoring/ in JavaScript
### Solution Brainstorm
Things we can support:
- https://github.com/getsentry/sentry-jav…
-
### Description
I am using streamText with the [Azure OpenAI provider for the AI SDK](https://sdk.vercel.ai/providers/ai-sdk-providers/azure) and their models. I use [createAzure](https://sdk.vercel.…
-
### Python Version
```shell
Appreciate any help solving the issue...
(I've seen other threads blaming this type of crash on CPU memory, but a g4dn.12xlarge has 192 GB of RAM. So unless…
-
# Problem
Since `token.js` currently integrates with every LLM provider through that provider's JavaScript SDK, the package size is much larger than necessary. This can impact the performance of backend services…
-
Hi there,
I'm running the latest version of the SDK (0.1.3), but when I initialize the client and call it, it does not return anything.
Here is my code:
```ts
const mistral = new MistralClient(env.PUB…
-
### Python -VV
```shell
Python 3.10.12 (main, Mar 22 2024, 16:50:05) [GCC 11.4.0]
```
### Pip Freeze
```shell
accelerate==0.32.1
aiohttp==3.9.5
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4…
-
### Operating System
macOS
### Version Information
not relevant
### Steps to reproduce
https://github.com/Azure/azureml-examples/blob/main/sdk/python/foundation-models/mistral/litellm.ipynb
@san…
-
## 🐛 Bug
I tried to use Mistral Small 7B Instruct v0.3 as a draft model for Mistral Large 2407. When not serving with `--mode server`, the model(s) never respond. I think that's because only CPU is u…
-