-
**Description:**
I encountered an issue while deploying the Mistral Large model on Azure through an AI hub. When using the method `builder.AddMistralChatCompletion(mistralModelName, mistralApiKey, mi…
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a sim…
-
Hi,
Very nice work! I have tried this notebook:
https://github.com/labmlai/inspectus/blob/main/notebooks/hf_phi.ipynb
While this is working well, I have tried to replicate just changing mode…
-
I am trying to use crewai_tools, the RAG tools in particular. Let's take DirectorySearchTool() as an example.
How do I use a custom embedder provider? The only options that embedchain ha…
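For anyone hitting the same question: the crewai_tools RAG tools accept a `config` dict that is forwarded to embedchain, and the embedder can be selected there. A minimal sketch, assuming the embedchain config schema; the `ollama` provider and the model name below are illustrative placeholders, not values from the original question:

```python
# Sketch: choosing a custom embedder for a crewai_tools RAG tool via the
# embedchain-style config dict. Provider and model are assumptions.
config = dict(
    embedder=dict(
        provider="ollama",                      # hypothetical provider choice
        config=dict(model="nomic-embed-text"),  # hypothetical embedding model
    ),
)

# With crewai_tools installed, the tool would then be constructed like:
# from crewai_tools import DirectorySearchTool
# tool = DirectorySearchTool(directory="./docs", config=config)
```

The same `config` shape should apply to the other RAG tools in the package, since they share the underlying embedchain adapter.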
-
### System Info
- `transformers` version: 4.37.0.dev0
- Platform: Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.11.0
- Huggingface_hub version: 0.20.1
- Sa…
-
The Model [mlx-community/Mistral-Nemo-Instruct-2407-8bit](https://huggingface.co/mlx-community/Mistral-Nemo-Instruct-2407-8bit) was converted to MLX format from [mistralai/Mistral-Nemo-Instruct-2407](…
-
### Self Checks
- [X] This is only for bug report, if you would like to ask a question, please head to [Discussions](https://github.com/langgenius/dify/discussions/categories/general).
- [X] I have s…
-
### Feature request
I would like to implement the Mixtral model in Flax
### Motivation
I am in the process of learning Flax, and I have almost finished the model conversion to Flax.
### Your contri…
-
I have executed the following command line:
```shell
python generate.py \
--base_model=mistralai/Mistral-Nemo-Instruct-2407 --use_gpu_id=True --gpu_id=-1 --max_seq_len=8192 \
--user_path=/…
-
### Your current environment
The output of `python collect_env.py`
```text
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A…