-
In this age of AI it's hardly much to share, but:
```typescript
declare module '@mistralai/mistralai' {
  class MistralClient {
    constructor(apiKey?: string, endpoint?: string)
    priva…
-
Hello Team,
Are you planning to add the ability to get structured outputs from your models?
I have a simple client-side implementation of it (basically by adding a prompt in the background)…
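For what it's worth, a minimal sketch of that client-side approach (the `ask_json` helper name, prompt wording, and fence-stripping are my own assumptions, not anything from the Mistral API):

```python
import json

# Hypothetical helper: wrap any chat-completion callable so it asks the
# model for JSON and parses the reply. `complete` is assumed to take a
# list of {"role", "content"} messages and return the assistant's text.
def ask_json(complete, user_prompt, schema_hint):
    messages = [
        {"role": "system",
         "content": "Reply with a single JSON object matching this shape, "
                    "and nothing else: " + schema_hint},
        {"role": "user", "content": user_prompt},
    ]
    raw = complete(messages)
    # Models often wrap JSON in markdown fences; strip them before parsing.
    raw = (raw.strip().removeprefix("```json").removeprefix("```")
              .removesuffix("```").strip())
    return json.loads(raw)
```

The stubbed `complete` below shows the intended usage without a network call.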
-
Here is my script for using Mistral's tool call API.
```python
import functools
import json
import os
import pandas as pd
from mistralai.models.chat_completion import ChatMessage, Function
…
ekzhu updated
8 months ago
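The part of that pattern that is easiest to get wrong is the function schema and the dispatch back from the model's `tool_calls`. A minimal sketch of just that plumbing (the `get_weather` function and its schema are my own illustration, not taken from the truncated script above):

```python
import functools
import json

# Illustrative tool: name, signature, and schema below are my own example.
def get_weather(city: str) -> str:
    return json.dumps({"city": city, "forecast": "sunny"})

# OpenAI-style function spec the model sees.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the forecast for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# Map tool names to callables so a tool_call can be dispatched.
names_to_functions = {"get_weather": functools.partial(get_weather)}

def dispatch(tool_call):
    """Run the function a model tool_call asks for and return its result."""
    fn = names_to_functions[tool_call["function"]["name"]]
    args = json.loads(tool_call["function"]["arguments"])
    return fn(**args)
```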
-
vllm 0.2.7 with cuda 12.1.
```
python -m vllm.entrypoints.openai.api_server --port=5002 --host=0.0.0.0 --model=TheBloke/dolphin-2.7-mixtral-8x7b-AWQ --seed 1234 …
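Since those flags expose an OpenAI-compatible endpoint, a request against it can be sketched like this (the port and model name mirror the command above; actually sending the request is left commented out so the sketch stands alone):

```python
import json
import urllib.request

# Build an OpenAI-style chat completion request for the server started above.
def build_request(prompt, model="TheBloke/dolphin-2.7-mixtral-8x7b-AWQ",
                  base="http://0.0.0.0:5002"):
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        base + "/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# req = build_request("Hello")
# with urllib.request.urlopen(req) as resp:  # requires the server to be running
#     print(json.load(resp))
```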
-
```shell
node chat_with_streaming.js
Chat Stream:
It's subjective to determine theundefined:1
{"id": "cmpl-794d708af6ed43aeab6a2a81390c5d90", "object": "chat.completion.chunk", "created": 17026721…
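The `undefined` in the middle of the stream suggests a network chunk splitting a JSON event, so the parser sees half an object. The robust pattern in any language is to buffer by SSE `data:` lines and only parse lines that are complete. A Python sketch of that idea (the example chunks in the usage are invented for illustration):

```python
import json

def parse_sse_stream(raw_chunks):
    """Yield parsed JSON events from SSE chunks that may split mid-line."""
    buffer = ""
    for chunk in raw_chunks:
        buffer += chunk
        # Only lines terminated by "\n" are complete; keep the tail buffered.
        *complete, buffer = buffer.split("\n")
        for line in complete:
            line = line.strip()
            if not line.startswith("data:"):
                continue
            data = line[len("data:"):].strip()
            if data == "[DONE]":
                return
            yield json.loads(data)
```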
-
Hi,
## Disclaimer
I was writing this issue while you answered #105, but now I'm confused as to whether I should post this one or not.
You said:
> Mistran is not supported
and
> don't giv…
-
### Your current environment
```text
# python3 collect_env.py
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build P…
-
I am trying to use two GPUs with tensor_parallel=2. It seems memory is only released on one GPU; some process is still running. client.terminate_server doesn't seem to kill all the processes. I…
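One general workaround when a server leaves worker processes behind is to launch it as the leader of its own session and signal the whole process group rather than just the parent. This is a plain POSIX sketch, not a vLLM API:

```python
import os
import signal
import subprocess

def launch_in_group(cmd):
    """Start cmd in a new session so it and its workers share a group id."""
    return subprocess.Popen(cmd, start_new_session=True)

def kill_group(proc, sig=signal.SIGTERM):
    """Signal every process in proc's group, not just proc itself."""
    os.killpg(os.getpgid(proc.pid), sig)
    proc.wait()
```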
-
I'm loading Mistral 7B Instruct and trying to expose it using LangServe. I'm having problems when concurrency is needed. My code looks like this:
Model loading
```
from langchain_community.llms.hug…
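A locally loaded HF pipeline is generally not safe to call from several requests at once; one low-tech fix is to serialize access with a lock. A sketch with a stand-in `generate` callable (in the real setup it would be the loaded LangChain LLM):

```python
import threading

class SerializedLLM:
    """Wrap a non-thread-safe callable so concurrent requests queue up."""
    def __init__(self, generate):
        self._generate = generate
        self._lock = threading.Lock()

    def __call__(self, prompt):
        # Only one request runs the underlying model at a time.
        with self._lock:
            return self._generate(prompt)
```

This trades throughput for correctness; batching or a dedicated inference server is the better fix if load is high.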
-
**Describe the bug**
I'd like to deploy mistral 0.2 LLM on sagemaker [it seems that we need to have the hugging face llm version 1.3.3](https://github.com/huggingface/text-generation-inference/issu…
LvffY updated
10 months ago
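If the fix is indeed pinning the TGI container version, the deployment mostly reduces to environment configuration. A sketch of that config (every value below is a placeholder assumption of mine, not taken from the issue):

```python
# Sketch of container environment for a TGI deployment of Mistral 0.2.
# Model id, GPU count, and token limits are placeholder assumptions.
def tgi_env(model_id="mistralai/Mistral-7B-Instruct-v0.2", num_gpus=1):
    return {
        "HF_MODEL_ID": model_id,
        "SM_NUM_GPUS": str(num_gpus),   # GPU count is passed via env
        "MAX_INPUT_LENGTH": "4095",
        "MAX_TOTAL_TOKENS": "4096",
    }
```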