-
### feature
Hi! Mistral-7B seems to be a great open-source LLM. Do you have plans to integrate it into LLaVA? Thanks!
-
**Describe the bug**
Chat window is empty with "Nothing to show"
**Information about your version**
I have built tabby from v0.17.0 with the command `cargo build --features rocm`
**Information abo…
-
### Please check that this issue hasn't been reported before.
- [X] I searched previous [Bug Reports](https://github.com/OpenAccess-AI-Collective/axolotl/labels/bug) and didn't find any similar reports.
…
-
On `"@slack/bolt": "^3.19.0"`, I hit a strange bug where a `static_select` element built with Block Builder does not display the `initial_option` correctly in my Slack app.
I'm using `"slack-block-builder"…
-
Hello,
Thanks for making such an awesome tool. Knowing nothing about LLMs and HuggingFace, I was able to use LLM to install models quantized with llama.cpp in GGUF format. I have a question about …
-
We are able to download the granite model using the command below:
ilab download --repository instructlab/granite-7b-lab-GGUF --release main --filename granite-7b-lab-Q4_K_M.gguf
ilab generate is worki…
-
I encountered an issue when trying to export a GGUF model file for Mistral Nemo and Mistral 7B finetunes using the `unsloth` library. The error occurs during the `save_pretrained_gguf` function call, …
-
Issue Description
Problem: When using the `mix generate text` command with `verbose` set to `false` and the following parameters:
- Temperature: 0.1 or 0
- Top p: 1
The LLM models seem to hallucinate mor…
-
### Environment
Conda environment:
python=3.10
mergekit commit f086664c983ad8b5f126d40ce2e4385f9e65f32c (latest as of yesterday)
transformers from git @ git+https://github.com/huggingface/transfo…
-
Any reason why mistralai_mistral-7b-instruct-v0.2 does not offload to the GPU?
load INSTRUCTOR_Transformer
max_seq_length 512
Starting get_model: llama
Failed to listen to n_gpus: No modu…