-
- In order to turn the MLC 0.5 40k prompts into trainable data, we need (potentially dangerous) SUT responses and pseudo ground-truth labels.
- This covers the first step: obtaining the SUT responses.
- inp…
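As a minimal sketch of this first step (not the repo's actual code): iterate over the prompts, query the system under test, and keep each prompt paired with its response for the later pseudo-labeling step. `query_sut` here is a hypothetical stand-in for whatever SUT API is actually used.

```python
import json

def query_sut(prompt: str) -> str:
    # Placeholder SUT: a real run would call the model under test here.
    return f"[SUT response to: {prompt}]"

def collect_responses(prompts):
    # Pair each prompt with its (potentially unsafe) SUT response,
    # ready for the subsequent pseudo ground-truth labeling step.
    return [{"prompt": p, "sut_response": query_sut(p)} for p in prompts]

records = collect_responses(["example prompt"])
print(json.dumps(records[0]))
```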
-
Mistral-7B is a much better model (and perhaps a better teacher) than Llama-2-7b. Would you kindly release checkpoints for a distilled Mistral? Would greatly appreciate it!
-
Thank you for your work.
How can the current code be modified to use llava-v1.6-mistral-7b for training this medical model?
-
Hi,
I successfully installed the unsloth library by following the instructions in #210. However, I encountered an issue when running trainer_stats = trainer.train() inside my VS Code virtual enviro…
-
Hi everyone,
I'm getting an error message with the Smart TextArea.
I configured Ollama with "DeploymentName": "mistral:7b".
When I try to run it, I get an error message like: Microsoft.AspN…
-
### Feature Description
New 7B coding model just released by Mistral.
- **Blog Post**: https://mistral.ai/news/codestral-mamba/
- **HF**: https://huggingface.co/mistralai/mamba-codestral-7B…
-
Can we fine-tune Mistral on a custom dataset in the field of digital marketing/marketing communication?
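In principle yes — the model-agnostic part is preparing the custom data. A hedged sketch (assuming an instruction-tuning setup; `marketing_pairs` is a hypothetical example dataset, not from the repo) of formatting marketing Q&A pairs into Mistral's `[INST]` instruction format before training:

```python
# Hypothetical custom dataset for digital-marketing fine-tuning.
marketing_pairs = [
    {"instruction": "Write a tagline for an eco-friendly shoe brand.",
     "response": "Step lightly. Live boldly."},
]

def to_training_text(pair):
    # Mistral's instruct template wraps the prompt in [INST] ... [/INST];
    # the response follows, and </s> marks the end of the turn.
    return f"<s>[INST] {pair['instruction']} [/INST] {pair['response']}</s>"

texts = [to_training_text(p) for p in marketing_pairs]
print(texts[0])
```

The resulting strings can then be fed to the usual trainer as the `text` field of the dataset.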
-
Using the following code yields a no-support error. Would love to see the model supported since it's currently one of the few Finnish-language LLMs.
```
from unsloth import FastLanguageModel
impo…
-
Do you plan to support the Mistral 7B model, since it outperforms both LLaMA-1 and LLaMA-2? (https://mistral.ai/news/announcing-mistral-7b/)
Thanks!
-
# Prerequisites
Please answer the following questions for yourself before submitting an issue.
- [x] I am running the latest code. Development is very rapid so there are no tagged versions as o…