-
Hi.
I am executing this code to merge Phi-3 models:
https://github.com/Leeroo-AI/mergoo/blob/main/notebooks/integrate_phi3_experts.ipynb
The result looks like this:
```text
ll integrate_phi3_…
```
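For context, the linked notebook follows mergoo's documented compose flow. A condensed sketch of that flow is below; the expert model ids, router layers, and config keys are illustrative placeholders based on mergoo's README, not copied from the notebook:

```python
import torch
from mergoo.compose_experts import ComposeExperts

# Sketch of mergoo's compose flow; expert ids and router layers are placeholders.
config = {
    "model_type": "phi3",
    "num_experts_per_tok": 2,
    "experts": [
        {"expert_name": "base_expert", "model_id": "microsoft/Phi-3-mini-4k-instruct"},
        # ...further fine-tuned Phi-3 experts would be listed here
    ],
    "router_layers": ["gate_up_proj", "down_proj"],  # MLP layers routed over
}

merger = ComposeExperts(config, torch_dtype=torch.float16)
merger.compose()                         # builds the merged MoE-style model
merger.save_checkpoint("data/phi3_moe")  # writes the merged checkpoint to disk
```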
-
## ❓ General Questions
When converting the fine-tuned qwen1.5-0.5B-chat model with MLC-LLM, the following error occurs. The same error does not occur when converting the fine-tuned qwen1.5-1.8…
-
### Please check that this issue hasn't been reported before.
- [X] I searched previous [Bug Reports](https://github.com/OpenAccess-AI-Collective/axolotl/labels/bug) and didn't find any similar reports.
…
-
I'm having a problem with fine-tuning the codellama-7b-Instruct model for a programming language. The issue is that the model seems to focus too much on the new dataset, and its performance isn't gre…
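One common way to frame this kind of forgetting is to constrain the update with a parameter-efficient adapter so most base weights stay frozen. A minimal LoRA sketch with PEFT follows; the hyperparameters and target modules here are illustrative, not from the original post:

```python
# Hypothetical sketch: LoRA fine-tuning keeps the base weights frozen,
# which tends to preserve more of the model's general ability.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-Instruct-hf")
lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank update
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attention projections only
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # only a small fraction is trainable
```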
-
### Your current environment
```text
Collecting environment information...
PyTorch version: N/A
Is debug build: N/A
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.…
```
-
### Question Validation
- [X] I have searched both the documentation and discord for an answer.
### Question
Please find below the code:
```python
import torch
from transformers import BitsAndBytesConfig
from …
```
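The snippet above is cut off, so for reference here is what a typical 4-bit quantized load with BitsAndBytesConfig looks like; the model id is a placeholder and this is a generic sketch, not the poster's exact code:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Illustrative 4-bit NF4 quantization setup (not the original, truncated code).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",      # placeholder model id
    quantization_config=bnb_config,
    device_map="auto",
)
```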
-
When I run run_llava.py, I get the following error:
```text
Traceback (most recent call last):
  File "/home/MMMU-main/eval/run_llava.py", line 106, in <module>
    main()
  File "/home/MMMU-main/eval/run_llava.…
```
-
Hi!
I am currently using the Hugging Face SFTTrainer to fine-tune my own model. I am saving the model to Weights & Biases, then downloading the adapter weights to my computer to use llama.cpp/convert-lor…
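For reference, here is a minimal sketch of that workflow, assuming a recent TRL version; the model id, dataset, and output paths are placeholders rather than the poster's actual setup:

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Minimal sketch: train a LoRA adapter with SFTTrainer, log to W&B,
# then save the adapter locally for conversion.
trainer = SFTTrainer(
    model="Qwen/Qwen2-0.5B",  # placeholder base model
    train_dataset=load_dataset("trl-lib/Capybara", split="train"),
    args=SFTConfig(output_dir="sft-out", report_to="wandb"),
    peft_config=LoraConfig(task_type="CAUSAL_LM"),
)
trainer.train()

# With a peft_config, save_model writes just the adapter files
# (adapter_model.safetensors + adapter_config.json), which is what
# llama.cpp's LoRA conversion script consumes as input.
trainer.save_model("sft-out/adapter")
```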
-
Decision-making lacks scientific rigor, continuity, and an error-correction mechanism.
For review tasks, different stages place different demands on precision and recall. For example, in 支小宝's pre-publication review we care more about precision, because we don't want to disturb users; in the post-hoc inspection stage we care more about recall (a sketch of this trade-off follows).
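To make the trade-off concrete, here is a small illustrative sketch; the scores and labels are invented, not from the notes. Raising the flagging threshold favors precision (pre-publication review), while lowering it favors recall (post-hoc inspection):

```python
# Toy example: moving a classifier's decision threshold trades
# precision against recall. All numbers are made up.
scores = [0.95, 0.90, 0.80, 0.60, 0.40, 0.30]   # model confidence per item
labels = [1,    1,    0,    1,    0,    0]       # 1 = should be flagged

def precision_recall(threshold):
    preds = [s >= threshold for s in scores]
    tp = sum(p and l for p, l in zip(preds, labels))
    fp = sum(p and not l for p, l in zip(preds, labels))
    fn = sum((not p) and l for p, l in zip(preds, labels))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# High threshold -> fewer flags, higher precision (pre-publication review).
print(precision_recall(0.85))  # (1.0, 0.667)
# Low threshold -> more flags, higher recall (post-hoc inspection).
print(precision_recall(0.35))  # (0.6, 1.0)
```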
The longer you work on it, the deeper the moat becomes;
What is the signal for judging this?
Continuous question-and-answer
Concept - periphery - pyramid - smash it open
---
Whether the employment rate can recover actually doesn't depend on these leading industries. Most people's educa…
-
### Describe the issue
Issue:
Hi @haotian-liu, please help me.
I have downloaded the llama-2 weights, llava-150k, and pretrain_mm_mlp_adapter. I just want to test the correctness of the program.
But the program ex…
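As a point of comparison, a minimal load smoke test with the haotian-liu/LLaVA codebase might look like the sketch below, assuming the repo is installed; the checkpoint path is a placeholder for the downloaded weights:

```python
# Hypothetical smoke test: verify the checkpoint loads before running eval.
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

model_path = "./checkpoints/llava-llama-2-7b"   # placeholder path
tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=model_path,
    model_base=None,
    model_name=get_model_name_from_path(model_path),
)
print(type(model).__name__, context_len)         # confirms the model loaded
```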