-
### Your current environment
The output of `python collect_env.py`:

```text
Collecting environment information...
PyTorch version: 2.1.2+cu121
Is debug build: False
CUDA used to build PyTorc…
```
-
**Describe the bug**
Local LLMs either raise a Timeout error or fail to parse the output.
Ragas version: 0.1.15
Python version: 3.11.3
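A minimal sketch of a possible mitigation for the timeout half of this, assuming the Ragas 0.1.x `RunConfig` API also applies to local LLM calls (the parsing failures likely need separate handling):

```python
# Sketch only (assumed Ragas 0.1.x API): build a RunConfig with a longer
# per-call timeout and pass it to evaluate(...) via its run_config= argument.
from ragas.run_config import RunConfig

run_config = RunConfig(
    timeout=600,     # seconds to wait per LLM call before raising Timeout
    max_retries=3,   # retry transient failures instead of aborting the run
    max_workers=2,   # fewer concurrent calls so a local model is not overloaded
)
```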
**Code to Reproduce**
```python
from transformers import Aut…
-
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
Cell In[20], line 2 …
-
Hi authors,
Thank you for your nice library! I am trying to use it to run Mistral 7B with CoT on GSM8K. I have several questions about the code when using `HFModel`:
- Which mistral 7B m…
-
Would be great to support this model in the sample scripts:
https://mistral.ai/news/mistral-large-2407/
https://huggingface.co/mistralai/Mistral-Large-Instruct-2407
-
Hello all. I keep scratching my head over why I can sometimes deploy everything on the list, yet other things I find keep having issues.
Anyway, these are my logs from trying to use this repo: https://huggingface.co/mistralai/Mis…
-
I am processing a document with augmentoolkit and am hitting a "list index out of range" exception in generation_functions/engine_wrapper_class.py
----------------------------------------…
-
### Description
Using Node.js v20.12.2 and TypeScript 5.4.5
```
tsc -p tsconfig.json
```
```
node_modules/ai/dist/index.d.ts:5:45 - error TS2307: Cannot find module '@mistralai/mistralai' …
-
The current `HuggingFaceInferenceSUT` uses the `chat_completion` API. WS3 pointed out that not all models are accessible via this API (e.g. mistralai/Mistral-Nemo-Instruct-2407).
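For illustration, a sketch (not the actual `HuggingFaceInferenceSUT` code) of the distinction, using `huggingface_hub.InferenceClient`: some checkpoints answer `chat_completion`, while others are only reachable through the raw `text_generation` endpoint.

```python
# Sketch only: fall back from the chat_completion API to text_generation for
# models that are not served through the chat endpoint.
from huggingface_hub import InferenceClient

client = InferenceClient(model="mistralai/Mistral-Nemo-Instruct-2407")
prompt = "Explain retrieval-augmented generation in one sentence."
try:
    out = client.chat_completion(
        messages=[{"role": "user", "content": prompt}], max_tokens=128
    )
    text = out.choices[0].message.content
except Exception:
    # Some models only expose the plain text-generation endpoint.
    text = client.text_generation(prompt, max_new_tokens=128)
print(text)
```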
-
### The model to consider.
https://huggingface.co/mistralai/Mistral-Large-Instruct-2407
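For context, a sketch of how I would expect the checkpoint to be used once supported, following the same offline-inference pattern as existing Mistral models (illustrative only, not a claim that this works today):

```python
# Sketch only: expected usage once the architecture is supported, mirroring
# how other Mistral checkpoints are loaded with vLLM's offline API.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mistralai/Mistral-Large-Instruct-2407",
    tensor_parallel_size=8,  # 123B parameters, so multi-GPU serving is assumed
)
params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Summarize the Mistral Large 2 announcement."], params)
print(outputs[0].outputs[0].text)
```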
### The closest model vllm already supports.
_No response_
### What's your difficulty of supporting the mode…