-
I tried to save LlaSMol-Mistral-7B so I can fine-tune it on my own dataset later, but I can't figure out how to do it correctly.
I tried:
```
from generation import LlaSMolGeneration
generator = LlaS…
```
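A sketch of one way to do this, assuming the LlaSMol weights are a standard Hugging Face checkpoint (the repo ID and function name below are assumptions, not verified against the LlaSMol codebase): load and re-save with `transformers`, then point the fine-tuning script at the local directory.

```python
def save_for_finetuning(repo_id, save_dir):
    """Download a Hugging Face checkpoint and save it locally so a
    fine-tuning script can later load it from disk.

    NOTE: the repo ID used in the usage example is an assumption;
    check the model card for the exact identifier.
    """
    # Heavy dependency, imported lazily inside the function.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained(repo_id)
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model.save_pretrained(save_dir)
    tokenizer.save_pretrained(save_dir)
    return save_dir

# Usage (downloads the full ~7B weights):
# save_for_finetuning("osunlp/LlaSMol-Mistral-7B", "./llasmol-local")
```

Later, `AutoModelForCausalLM.from_pretrained("./llasmol-local")` reloads the saved copy for fine-tuning.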
-
Hi,
When I try to run the command `!mistral-demo $7B_DIR`, I encounter a GPU-related issue. Could you please suggest a solution? I am using Google Colab.
![issue pic ](https://github.com…
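As a first diagnostic (a minimal sketch, not specific to mistral-demo), it can help to confirm the Colab runtime actually has a GPU attached before running the command:

```python
# Minimal sanity check: confirm a CUDA GPU is visible to PyTorch.
# On Colab this requires Runtime -> Change runtime type -> GPU.
import importlib.util

def gpu_ready():
    """Return True only if torch is installed and sees a CUDA device."""
    if importlib.util.find_spec("torch") is None:
        return False
    import torch
    return torch.cuda.is_available()

print("GPU ready:", gpu_ready())
```

If this prints `False`, the error is an environment problem rather than a bug in the demo script.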
-
-
Currently our stop words are hardcoded; we need to fix this so the code works stably with Mistral v0.3.
The following models need to be tested before we ship them:
- [x] mistral:7b-tensorrt-llm-linux-ampere
…
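One way to remove the hardcoding (an illustrative sketch; the names below are not the project's actual API) is to look stop words up per model, with a configurable override table falling back to a default list:

```python
# Illustrative sketch: per-model stop words with a configurable
# override table instead of a single hardcoded list.
DEFAULT_STOP_WORDS = ["</s>"]

def get_stop_words(model_name, overrides=None):
    """Return the stop words for a model, falling back to defaults."""
    overrides = overrides or {}
    return overrides.get(model_name, DEFAULT_STOP_WORDS)

print(get_stop_words("mistral:7b-instruct"))
print(get_stop_words("mistral:7b-instruct",
                     {"mistral:7b-instruct": ["[/INST]", "</s>"]}))
```

New model variants such as Mistral v0.3 can then be supported by editing configuration rather than code.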
-
In the current list of default models there is an odd omission: while most models also have a 32-bit variant available, Mistral 7B does not.
The practical result is that Linux users are missing ou…
-
Dear Eagle Team:
Hello, and thank you very much for your excellent work for the community. Recently, while attempting to replicate Eagle, I encountered some issues that I have been unable to resolv…
-
### Bug Description
There is a missing comma that prevents streaming from being used with the Haiku and Sonnet 3.5 models:
```
from llama_index.llms.bedrock.utils import STREAMING_MODELS
STREAMING_MODELS
{'…
```
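For anyone hitting this, the mechanism is worth spelling out: in Python, adjacent string literals are implicitly concatenated, so a missing comma in a set literal silently fuses two model IDs into one bogus entry (the model names below are illustrative, not the real Bedrock IDs):

```python
# Adjacent string literals are implicitly concatenated, so a missing
# comma silently merges two entries into one bogus model ID.
broken = {
    "model-a",
    "model-b"   # <- missing comma
    "model-c",
}
fixed = {
    "model-a",
    "model-b",
    "model-c",
}
print(len(broken))          # 2: "model-b" and "model-c" fused into "model-bmodel-c"
print(len(fixed))           # 3
print("model-b" in broken)  # False
```

This is why membership checks for the affected models fail and streaming is refused.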
-
Mistral-7B is a much better model (and perhaps a better teacher) than Llama-2-7B. Would you kindly release checkpoints for a distilled Mistral? I would greatly appreciate it!
ojus1 updated 2 months ago
-
Hi everyone,
I'm getting an error message with the Smart TextArea.
I configured Ollama with "DeploymentName": "mistral:7b".
When I try to run it, I get an error message like: Microsoft.AspN…
-
Using the following code yields a no-support error. Would love to see the model supported since it's currently one of the few Finnish-language LLMs.
```
from unsloth import FastLanguageModel
impo…
```