-
First of all, thanks for the great open source library!
The docs promise a few additional metrics that I'm not seeing in vLLM 0.3.0; have these been removed? I.e. if I hit `/metrics` of the Op…
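One way to check exactly which metrics a server exposes is to fetch `/metrics` and list the metric names from the Prometheus text format. This is a minimal sketch; the local URL and port are assumptions, not something the issue specifies.

```python
import urllib.request

def parse_metric_names(text: str) -> set[str]:
    """Extract metric names from a Prometheus text-format payload."""
    names = set()
    for line in text.splitlines():
        line = line.strip()
        # Skip blank lines and HELP/TYPE comment lines
        if not line or line.startswith("#"):
            continue
        # A sample line looks like: vllm:num_requests_running{model_name="m"} 0.0
        # The metric name is everything before the label block or the value.
        names.add(line.split("{")[0].split(" ")[0])
    return names

# Example usage (assumes a vLLM server is running locally on port 8000):
# body = urllib.request.urlopen("http://localhost:8000/metrics").read().decode()
# print(sorted(parse_metric_names(body)))
```

Comparing that output against the metric names listed in the docs makes it easy to see which ones are actually missing in a given version.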
-
```py
# install DSPy: pip install dspy
import dspy
# This sets up the language model for DSPy; in this case we are using Mistral 7B through TGI (Text Generation Inference from Hugging Face)
mistra…
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain.js documentation with the integrated search.
- [X] I used the GitHub search to find a …
-
A recent PR pinned the minimum pyarrow version to `^15.0.0`, but that's a fairly new version -- Jan 21, 2024 -- and much of the pydata ecosystem is not there yet, e.g., the latest NVIDIA RAPIDS is onl…
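One way to avoid forcing the whole environment onto pyarrow 15 is a wider version range. The bounds below are purely illustrative, assuming a Poetry-style `pyproject.toml`; they are not a maintainer recommendation.

```toml
# pyproject.toml (Poetry) -- illustrative bounds only
[tool.poetry.dependencies]
# Accept any pyarrow from 12 up to (but not including) 16, instead of ^15.0.0
pyarrow = ">=12.0.0,<16"
```

A range like this lets downstream projects that are still on older pyarrow (e.g. via RAPIDS) resolve the dependency without conflicts.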
-
### Description
The code below fails in parallel mode (it works without parallelism).
We had this issue several times 🙁 - iterative/dvcx#1620
```python
import os
from mistralai.client impo…
-
TogetherAI just [announced](https://www.together.ai/blog/function-calling-json-mode) JSON mode and function calling for their models. It currently supports these models: Mixtral, Mistral, and CodeLlam…
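Since Together's endpoint follows the OpenAI-compatible chat-completions shape, JSON mode boils down to adding a `response_format` field to the request payload. This sketch only builds the payload; the exact field names and the schema-in-prompt pattern are assumptions based on the announcement, and the helper function is hypothetical.

```python
import json

def build_json_mode_request(model: str, user_prompt: str, schema: dict) -> dict:
    """Build an OpenAI-style chat-completions payload that asks for JSON output.

    The `response_format` field follows the JSON-mode convention; embedding the
    schema in the system prompt is a common pattern, not a required API feature.
    """
    return {
        "model": model,
        "messages": [
            {
                "role": "system",
                "content": "Reply only in JSON matching this schema: " + json.dumps(schema),
            },
            {"role": "user", "content": user_prompt},
        ],
        "response_format": {"type": "json_object"},
    }

payload = build_json_mode_request(
    "mistralai/Mixtral-8x7B-Instruct-v0.1",
    "List three colors.",
    {"type": "object", "properties": {"colors": {"type": "array"}}},
)
```

The resulting dict can be POSTed to the chat-completions endpoint with any HTTP client; only the `response_format` field differs from a normal request.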
-
I am getting an import failure with the latest update, on ComfyUI v0.2.7.
-
Currently llamafiles are supported as generic OAI-compatible servers. While this works, it forces us to specify a valid (i.e. hf-transformers loadable) model name - which is needed for vLLM but might …
-
In the latest release we are getting the following error while installing PrivateGPT:
```
LLM model downloaded!
Downloading tokenizer mistralai/Mistral-7B-Instruct-v0.2
Traceback (most recent call l…
-
My crew worked with the Mistral AI API before, at v0.55.
However, it doesn't work in v0.14.3. Is there anything I missed? Thank you.
My code snippet:
```python
from langchain_mistralai.c…