-
Which way CommonJS?
-
**Is your feature request related to a problem? Please describe.**
We are integrating with HF to get the top K TGI/TEI models and generate the deployment settings for these models. We need to have a …
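For reference, listing the most-downloaded models for a pipeline is a single query against the Hub API; a minimal sketch with `huggingface_hub.list_models` (the filter, K, and printed fields are placeholder assumptions, not the integration's actual settings):
```
# Sketch: fetch the K most-downloaded models matching a tag via the HF Hub API.
from huggingface_hub import list_models

K = 10  # assumed value for illustration
top_models = list_models(
    filter="text-generation",  # assumed tag; a TEI integration would filter differently
    sort="downloads",
    direction=-1,
    limit=K,
)
for model in top_models:
    print(model.id, model.downloads)
```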
-
Function calling has been unusable for open-mixtral-8x22b and mistral-small-latest since Monday. It was working fine on Sunday, and it still works fine with mistral-large-latest.
The code is from the tutorial at htt…
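Not the tutorial's code, but for reference, a minimal tool-calling request with the Python client looks roughly like this (the tool schema and prompt are invented for illustration):
```
# Sketch: a single function-calling request against mistral-small-latest.
import os
from mistralai.client import MistralClient
from mistralai.models.chat_completion import ChatMessage

client = MistralClient(api_key=os.environ["MISTRAL_API_KEY"])

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool for illustration
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat(
    model="mistral-small-latest",
    messages=[ChatMessage(role="user", content="What's the weather in Paris?")],
    tools=tools,
    tool_choice="auto",
)
print(response.choices[0].message.tool_calls)
```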
-
## Describe the bug
Hello, I found in PR #595 that Mistral Nemo Instruct 2407 is supported. It is working really well (using ISQ on HF safetensors).
Are GGUF models supported too?
Using the Q8_0 fr…
-
### Your current environment
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Rocky Linux 8.8 (Green Obsidian) (x86_64)
G…
-
I am using a RunPod container to run vLLM.
Template: runpod/pytorch:2.1.1-py3.10-cuda12.1.1-devel-ubuntu22.04
GPU Cloud: 1 x RTX 3090 | 12 vCPU 31 GB RAM
It works perfectly fine when I send 9 con…
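If the truncated sentence refers to the number of simultaneous requests, a rough way to reproduce that load against vLLM's OpenAI-compatible endpoint is sketched below (base URL, model name, and payload are placeholders, not the pod's actual values):
```
# Sketch: send several requests concurrently to a vLLM OpenAI-compatible server.
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # assumed URL

async def one_request(i: int) -> str:
    resp = await client.chat.completions.create(
        model="mistralai/Mistral-7B-Instruct-v0.2",  # assumed model
        messages=[{"role": "user", "content": f"Request {i}: say hello."}],
        max_tokens=32,
    )
    return resp.choices[0].message.content

async def main() -> None:
    results = await asyncio.gather(*(one_request(i) for i in range(9)))
    for text in results:
        print(text)

asyncio.run(main())
```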
-
I've installed the extension, edited the Settings to use OpenAI for the model, code completion, and the (ada) embedding, and added my OpenAI key. No Wingman features work; they fail silently except for the ho…
-
### Bug Description
I'm using the Evaluator within a Flask app.
This works fine with all these LLMs:
- llama-index-llms-openai
- llama-index-llms-openailike
- llama-index-llms-together
But f…
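The report is cut off, but for reference, a rough sketch of this kind of evaluator setup with one of the working integrations (the evaluator class, model name, and inputs here are illustrative assumptions, not the app's code):
```
# Sketch: a llama-index evaluator driven by the OpenAI LLM integration.
from llama_index.core.evaluation import FaithfulnessEvaluator
from llama_index.llms.openai import OpenAI  # pip install llama-index-llms-openai

evaluator = FaithfulnessEvaluator(llm=OpenAI(model="gpt-4o-mini"))
result = evaluator.evaluate(
    query="What is the capital of France?",
    response="Paris is the capital of France.",
    contexts=["Paris is the capital and largest city of France."],
)
print(result.passing, result.score)
```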
-
The following code does not produce a log:
```
import logging
from mistralai.client import MistralClient
logger = logging.getLogger( __name__ )
logger.info( "Hello world" )
```
This is …
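If the missing output comes down to Python's default logging configuration (no handler configured and an effective WARNING threshold), then configuring the root logger is enough to make the record appear; a minimal sketch of that stdlib behaviour, which does not rule out the client changing logging state on import:
```
# Sketch: with a handler and level configured, logger.info() does emit.
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
logger.info("Hello world")  # prints: INFO:__main__:Hello world
```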
-
## Description
Using a custom logging level, e.g. `TRACE`, results in a `ValueError` when `client_base.py` is imported.
In [`client_base.py`](https://github.com/mistralai/client-python/blob/80c79…
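For context on the stdlib side (this is not `client_base.py`'s actual code): `Logger.setLevel` raises `ValueError: Unknown level: ...` when handed a level name that has not been registered, so a custom level must be declared with `logging.addLevelName` before any import-time `setLevel` call sees it. A minimal sketch:
```
# Sketch of the stdlib mechanics only; the client's import-time logic is not reproduced here.
import logging

TRACE = 5  # hypothetical numeric value for the custom level
logging.addLevelName(TRACE, "TRACE")

# With the name registered, resolving it by string works:
logging.getLogger("demo").setLevel("TRACE")

# Without the addLevelName() call above, the same setLevel("TRACE")
# raises ValueError: Unknown level: 'TRACE'.
```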