-
### Your current environment
```text
$ python collect_env.py
Collecting environment information...
PyTorch version: 2.2.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used …
-
Thank you for your work! Can we please make the orjson dependency optional? I have concerns about the quality of this dependency (compatibility, security) and would prefer not to use it in my projects.
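For example, the dependency could be made optional with a standard try/except import fallback to the stdlib — just a sketch of the pattern, not a proposal for the exact module layout:

```python
# Optional-dependency pattern: use orjson when available, fall back to the
# stdlib json module otherwise. Function names here are illustrative.
import json

try:
    import orjson  # optional fast path

    def dumps(obj):
        # orjson.dumps returns bytes, so decode to keep a str-based API
        return orjson.dumps(obj).decode("utf-8")

    def loads(data):
        return orjson.loads(data)

except ImportError:

    def dumps(obj):
        return json.dumps(obj)

    def loads(data):
        return json.loads(data)
```

Either branch exposes the same `dumps`/`loads` interface, so the rest of the codebase does not need to know which backend is installed.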
…
-
Hi!
How can I call (Mistral Large) functions with this client?
I could not find any mention of function calling in the API documentation (https://docs.mistral.ai/api/#operation/createChatCo…
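Here is what I tried, guessing at the API from other OpenAI-compatible clients — the `get_weather` tool is a made-up example, and the exact import paths may differ between client versions:

```python
import os

# Hypothetical tool definition in the OpenAI-style JSON-schema format;
# the function name, description, and parameters are made up for illustration.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

if os.environ.get("MISTRAL_API_KEY"):
    # Assumes the mistralai Python client; import paths are version-dependent.
    from mistralai.client import MistralClient
    from mistralai.models.chat_completion import ChatMessage

    client = MistralClient(api_key=os.environ["MISTRAL_API_KEY"])
    response = client.chat(
        model="mistral-large-latest",
        messages=[ChatMessage(role="user", content="What's the weather in Paris?")],
        tools=tools,
        tool_choice="auto",
    )
    # The model may return tool_calls instead of plain content.
    print(response.choices[0].message.tool_calls)
```

Is something along these lines supported, or is there a different intended mechanism?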
-
### Description
This is a copy of iterative/dvcx#1663 from dvcx. Raising the priority because of its frequent occurrence.
### Description
Let us assume we have a wrong API key to simulate a UDF err…
-
### Describe the bug
Crash (abort) when trying to use an AMD graphics card in the editor.
The model is mistral-7b-instruct-v0.2.Q4_K_M.gguf.
ggml_cuda_init: found 1 ROCm devices:
Device 0: AMD Radeon RX…
-
### Python -VV
```shell
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
Cell In[29], line …
-
```
import os
params = {'max_tokens': 1024, 'model': 'mistral-medium', 'random_seed': 13400, 'safe_prompt': False, 'temperature': 0.3, 'top_p': 1.0}
messages = [{'role': 'user', 'content': 'Give …
-
With the replicate 0.24.0 Python client and "mistralai/mistral-7b-instruct-v0.2" (a model that supports streaming), the iterator I get back from client.run() frequently truncates the output, perh…
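A minimal sketch of how I'm consuming the stream — the prompt and input parameters are placeholders, and the join helper is just what I use to compare the streamed result against the expected full output:

```python
import os


def collect(chunks):
    """Join streamed text chunks into the full output, to compare against
    the expected complete text when checking for truncation."""
    return "".join(str(c) for c in chunks)


if os.environ.get("REPLICATE_API_TOKEN"):
    import replicate

    # replicate.run() returns an iterator of output chunks for models that
    # stream; joining the chunks should yield the complete generation.
    out = replicate.run(
        "mistralai/mistral-7b-instruct-v0.2",
        input={"prompt": "Say hello in five words."},  # placeholder input
    )
    print(collect(out))
```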
-
Hi,
When I specify the W&B integration incorrectly in my code (see "foo" below):
```
created_job = self.client.jobs.create(
model="open-mistral-7b",
training_files=[trai…
-
The [server](https://github.com/ggerganov/llama.cpp/tree/master/examples/server) example has been growing in functionality, but unfortunately I feel it is not very stable at the moment, and there are so…