-
Hi,
I am trying to prune Mistral 7B (https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2), and while I was able to run the commands for magnitude pruning successfully, I am facing issues with…
-
Dear Eagle Team:
Hello, and thank you very much for your excellent work on behalf of the community. Recently, while attempting to replicate Eagle, I encountered some issues that I have been unable to resolv…
-
### Your current environment
The output of `python collect_env.py`
OS: Ubuntu 22.04.2 LTS (x86_64)
GCC version: (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Clang version: Could not collect
CMake vers…
-
## Describe the bug
Not sure if this is a bug, but if it is expected behavior, the docs could perhaps benefit from another example.
According to the [Rust example](https://github.com/EricLBuehler/mistral.rs/blob/master/mistralr…
-
-
**Describe**
Thank you for your team's contribution! I would like to fine-tune E5-mistral-7b-instruct for tasks that interest me. Do you have plans to open-source the training code? Alternatively, are th…
-
**Describe the bug**
I've been trying to get the chat to work with llama3, llama3.1, mistral, codellama:7b-instruct, and codegemma:7b-instruct, but it always fails ("Sorry, I don't understand").
**To R…
-
The following model names will no longer be available via API:
* llama-3-sonar-small-32k-online
* llama-3-sonar-large-32k-online
* llama-3-sonar-small-32k-chat
* llama-3-sonar-large-32k-chat…
-
Dear authors,
In addition to the bge series, I would like to see how other embedding models perform on my own custom dataset.
I was wondering if I can use the following fine-tuning script for fine-tu…
-
Overview:
We want to use pytest's `parametrize` decorator to enable testing multiple models from a provider.
For example:
We automatically test each provider's default LLM.
```
@pytest.m…
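# The truncated decorator above is presumably `@pytest.mark.parametrize`.
# A minimal sketch of what such a parametrized test could look like; the
# model names and the `query_model` helper below are illustrative
# assumptions, not the project's actual API.
import pytest

# Hypothetical list of models to cover for one provider.
MODELS = ["mistral", "llama3", "codellama:7b-instruct"]

def query_model(model: str, prompt: str) -> str:
    # Stand-in for the real provider call.
    return f"{model}: echo {prompt}"

# pytest generates one test case per entry in MODELS.
@pytest.mark.parametrize("model", MODELS)
def test_model_responds(model):
    reply = query_model(model, "Hello")
    assert reply  # each model must yield a non-empty reply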