-
### System Info
TGI Version: v2.0.4
Model: `mistralai/Mixtral-8x22B-Instruct-v0.1`
Hardware: 8x NVIDIA H100 80GB HBM3
Deployment specifics: OpenShift
### Information
- [X] Docker
-…
-
### Description
Hi,
I'm using this API implementation with both OpenAI and MistralAI models.
My implementation works perfectly with both, except for the tool_calls function, which is the on…
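For context, a minimal sketch of what an OpenAI-compatible chat completion request with tools looks like when sent to a TGI Messages API endpoint. The tool name (`get_weather`) and its schema are hypothetical placeholders chosen for illustration, not taken from the original issue:

```python
import json

def build_tool_call_request(user_message: str) -> dict:
    """Build an OpenAI-compatible chat completion request body with tools.

    The function name and parameter schema below are hypothetical examples.
    """
    return {
        "model": "mistralai/Mixtral-8x22B-Instruct-v0.1",
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",  # hypothetical example tool
                    "description": "Get the current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
        "tool_choice": "auto",
    }

body = build_tool_call_request("What is the weather in Paris?")
print(json.dumps(body, indent=2))
```

The same body can be POSTed to either an OpenAI or a TGI `/v1/chat/completions` endpoint, which is what makes differences in `tool_calls` handling between the two backends visible.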
-
### System Info
transformers 4.41.0 (and other versions)
Python 3.10
Ubuntu 22.04
### Who can help?
@ArthurZucker and @younesbelkada
### Information
- [x] The official example scripts
- [ ] My own modified scripts
##…
-
Important attributes such as `openinference.span.kind` should be inserted only at the very end.
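One way to sketch this: since Python dicts preserve insertion order (3.7+), the `openinference.span.kind` attribute can be stripped from wherever it appears and re-appended after all other attributes are set. The helper name and attribute values below are hypothetical illustrations:

```python
def finalize_span_attributes(attributes: dict, span_kind: str) -> dict:
    """Return a copy of `attributes` with openinference.span.kind appended last.

    Any existing occurrence of the key is dropped first, so the re-inserted
    value always ends up as the final key in insertion order.
    """
    ordered = {k: v for k, v in attributes.items() if k != "openinference.span.kind"}
    ordered["openinference.span.kind"] = span_kind
    return ordered

attrs = finalize_span_attributes(
    {"llm.model_name": "mixtral", "openinference.span.kind": "LLM", "input.value": "hi"},
    span_kind="LLM",
)
print(list(attrs))  # 'openinference.span.kind' is the final key
```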
-
I am using the PPOTrainer on Mixtral with 8 GPUs running CUDA 12.4. Do you have any idea how to solve the following issue? (I have also updated all Python packages.)
Here is the e…
-
### System Info
- CPU architecture: x86_64
- CPU/Host memory size: 126G
- GPU properties
- GPU name: L4
- GPU memory size: 24GB
- Libraries
- TensorRT-LLM branch or tag (e.g., main, v0.…
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain.js documentation with the integrated search.
- [X] I used the GitHub search to find a …
-
## Prerequisites
- [x] I read the [Deployment and Setup](https://docs.opencti.io/latest/deployment/overview) section of the OpenCTI documentation as well as the [Troubleshooting](https://docs.openc…
-
**Is your feature request related to a problem? Please describe.**
Currently the only provided conversion script is https://github.com/NVIDIA/NeMo/blob/main/scripts/checkpoint_converters/convert_mi…
-
**Description:**
I encountered an issue while deploying the Mistral Large model on Azure through an AI hub. When using the method `builder.AddMistralChatCompletion(mistralModelName, mistralApiKey, mi…