-
## Goals
- collect data on HyDE effectiveness with open source LLMs
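To make the goal concrete: HyDE (Hypothetical Document Embeddings) has the LLM generate a hypothetical answer passage for the query, then embeds that passage (instead of the raw query) for retrieval. A minimal runnable sketch, with stand-in generator and embedder functions that are purely illustrative (in a real run the generator would be one of the LLMs listed below and the embedder a sentence-embedding model):

```python
import math

def generate_hypothetical_doc(query: str) -> str:
    # Stand-in: a real HyDE setup would prompt an LLM to write a plausible
    # answer passage for the query.
    return f"A document that answers: {query}"

def embed(text: str) -> list[float]:
    # Toy bag-of-letters embedding, just to make the pipeline runnable.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hyde_retrieve(query: str, corpus: list[str]) -> str:
    # The key HyDE step: embed the hypothetical answer, not the raw query.
    hypo = generate_hypothetical_doc(query)
    q_vec = embed(hypo)
    return max(corpus, key=lambda doc: cosine(q_vec, embed(doc)))
```

Measuring "effectiveness" then reduces to comparing retrieval quality with and without the `generate_hypothetical_doc` step, per model.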
## Tested LLMs
- [x] Mistral 7B instruct v0.2
- [x] Llama2 7B chat hf
- [x] Zephyr-7B beta
## Benchmark Results
##…
-
With Llama 3.1, an end-of-message token has been introduced to help support multi-turn reasoning.
End of message. A message represents a possible stopping point for execution where the model can inform the executor that a…
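The stopping-point behaviour described above can be sketched as a decode loop that distinguishes an end-of-turn stop from an end-of-message stop. The token names and ids below are assumptions for illustration only, not taken from this thread; the real ids come from the Llama 3.1 tokenizer:

```python
# Hypothetical stop-token ids (assumed values for illustration).
EOT_ID = 128009  # assumption: end of turn -- the model is fully done
EOM_ID = 128008  # assumption: end of message -- a possible stopping point
                 # where control returns to the executor (e.g. a tool call)

def run_until_stop(token_stream):
    """Consume tokens until a stop token, reporting why generation paused."""
    generated = []
    for tok in token_stream:
        if tok == EOT_ID:
            return generated, "turn finished"
        if tok == EOM_ID:
            # Hand control back to the executor; generation can resume
            # later with the executor's result appended to the context.
            return generated, "awaiting executor"
        generated.append(tok)
    return generated, "stream ended"
```

The point of the distinction is that `"awaiting executor"` is not the end of the turn: the executor runs, appends its output, and decoding continues.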
-
### System Info
I am using a Tesla T4 (16 GB).
### Reproduction
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

base_model_id = "mistralai/Mistral-7B-…
```
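For context on why the reproduction reaches for `BitsAndBytesConfig` on a 16 GB T4, a back-of-envelope weight-memory check (the parameter count is an approximation, not a figure from this issue) shows how tight full-precision loading is:

```python
def weight_footprint_gib(n_params: float, bits_per_param: int) -> float:
    """Approximate GiB needed for the weights alone (no activations, no KV cache)."""
    return n_params * bits_per_param / 8 / 2**30

n = 7.24e9  # Mistral-7B parameter count, approximate

fp16_gib = weight_footprint_gib(n, 16)  # ~13.5 GiB: barely fits, no headroom
nf4_gib = weight_footprint_gib(n, 4)    # ~3.4 GiB: comfortable on a 16 GiB card
```

With activations and the KV cache on top of ~13.5 GiB of fp16 weights, a T4 runs out of memory quickly, which is why 4-bit loading is the usual choice here.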
-
I fine-tuned the llava-phi3 model with LoRA, but when I try to convert the resulting weights, an error occurs.
This is my command:
```
xtuner convert pth_to_hf ./my_configs/llava_phi3_mini_qlora_clip_vit_l…
```
-
Hello!
Thank you for your great work. I have the following question:
I have my own Elyza7B checkpoint that I want to fine-tune on a VQA task. If I follow the llava training scheme closely, I think we n…
-
### System Info
python 3.10.10
torch 2.3.1
transformers 4.43.2
optimum 1.17.1
auto_gptq 0.7.1
bitsandbytes 0.43.2
accelerate …
-
**Describe the bug**
Hi, I used the llm-compressor quantization script in a Colab notebook, but it raises an error concerning the torch dtype.
**Expected behavior**
The quantized model should b…
-
I want to fine-tune `MPT-7B-Instruct`, but I don't know how.
Can `mpttune` help me with this?
-
I fine-tuned llama3-8b with LoRA and followed the tutorial in the repository to convert the final result into `model.pth`. However, when I try to load the fine-tuned weights into the model using `Auto…
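Since several of the questions above involve LoRA fine-tuning and weight conversion, a minimal framework-free sketch of what a LoRA layer computes, and what "merging" the adapter into the base weights means, may help. Everything here is illustrative pure Python, not any library's actual API:

```python
# LoRA: the adapted layer computes y = W x + (alpha / r) * B (A x),
# where A is r x d_in and B is d_out x r, both small and trained.
# Merging folds (alpha / r) * B A into W so no adapter is needed at load time,
# which is what a pth -> hf style conversion has to preserve.

def matvec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(mi * vi for mi, vi in zip(row, v)) for row in m]

def lora_forward(W, A, B, x, alpha, r):
    """Adapted forward pass: base output plus the scaled low-rank update."""
    base = matvec(W, x)
    delta = matvec(B, matvec(A, x))
    return [b + (alpha / r) * d for b, d in zip(base, delta)]

def merge_lora(W, A, B, alpha, r):
    """Return W_merged = W + (alpha / r) * B A."""
    d_out, d_in = len(W), len(W[0])
    merged = [row[:] for row in W]
    for i in range(d_out):
        for j in range(d_in):
            merged[i][j] += (alpha / r) * sum(B[i][k] * A[k][j] for k in range(r))
    return merged
```

A correct conversion keeps the two paths equivalent: running the merged weights alone must match running the base weights plus the adapter.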
-
```
Traceback (most recent call last):
File "/root/miniconda3/lib/python3.10/site-packages/mmengine/runner/_flexible_runner.py", line 1271, in call_hook
getattr(hook, fn_name)(self, **kwargs)…