-
Hi! First of all, thanks a lot for such an amazing project.
I wonder if there is a valid way to fine-tune the model for specific tasks using customized datasets? I am trying to adapt the model to impro…
-
I have completed instruction tuning with code_alpaca_20k.json.
```
deepspeed instruct_tune_codet5p.py \
--load /home/ubuntu/ChatGPT/Models/Salesforce/codet5p-6b --save-dir output/inst…
```
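For reference, code_alpaca_20k.json records follow the Alpaca schema (`instruction` / `input` / `output` fields). A minimal sketch of turning one such record into a (prompt, completion) pair for instruction tuning — the templates below are the commonly used Alpaca ones, assumed rather than taken from `instruct_tune_codet5p.py` itself:

```python
# Alpaca-style prompt templates (an assumption; check the actual
# preprocessing in instruct_tune_codet5p.py before relying on these).
PROMPT_WITH_INPUT = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n"
)
PROMPT_NO_INPUT = (
    "Below is an instruction that describes a task. Write a response that "
    "appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

def build_example(record: dict) -> dict:
    """Turn one Alpaca-style record into a (prompt, completion) pair."""
    template = PROMPT_WITH_INPUT if record.get("input") else PROMPT_NO_INPUT
    return {
        "prompt": template.format(**record),
        "completion": record["output"],
    }

record = {"instruction": "Write a function that adds two numbers.",
          "input": "", "output": "def add(a, b):\n    return a + b"}
example = build_example(record)
```

Since the record's `input` field is empty, the no-input template is selected and the `### Input:` section is omitted.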
-
Hi,
Can you please give us instructions for fine-tuning the DeepSeek-V2 model? Can we use the `finetune.py` script from `DeepSeek-MoE`?
https://github.com/deepseek-ai/DeepSeek-MoE/blob/main/finetune/fine…
-
### Question
I compared `./scripts/v1_5/finetune.sh` and `./scripts/v1_5/finetune_lora.sh`. When running `finetune.sh`, if `mm_projector_lr` is not specified, the parameters of `mm_projector` will no…
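For context, the usual mechanism behind a flag like `mm_projector_lr` is optimizer parameter groups: parameters whose names contain the projector keyword get their own learning rate, and everything else uses the base rate. A plain-Python sketch of that split (function name and behavior are illustrative assumptions, not the actual LLaVA training code):

```python
def make_param_groups(named_params, base_lr, projector_lr=None,
                      keyword="mm_projector"):
    """Split (name, param) pairs into optimizer groups by name substring.

    named_params mimics model.named_parameters() in PyTorch. If
    projector_lr is None, every parameter falls into one group at base_lr.
    """
    named_params = list(named_params)
    if projector_lr is None:
        return [{"params": [p for _, p in named_params], "lr": base_lr}]
    projector, rest = [], []
    for name, p in named_params:
        (projector if keyword in name else rest).append(p)
    return [
        {"params": rest, "lr": base_lr},          # backbone parameters
        {"params": projector, "lr": projector_lr},  # projector parameters
    ]

# Toy parameters stand in for real tensors.
params = [("model.layers.0.weight", "w0"),
          ("model.mm_projector.weight", "wp")]
groups = make_param_groups(params, base_lr=2e-5, projector_lr=2e-6)
```

In PyTorch these group dicts would be passed straight to an optimizer, e.g. `torch.optim.AdamW(groups)`.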
-
In FinGPT-v1, only one type of instruction is used as the model input during fine-tuning:
`"instruction": "What is the sentiment of this news? Answer:{very negative/negative/neutral/positive/v…
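That single instruction pattern can be reproduced with a small record builder. This is a sketch: the schema and the full label set are assumptions inferred from the truncated quote above, not taken from the FinGPT-v1 code:

```python
# Label set assumed; the quote above truncates after "positive/v…".
LABELS = ["very negative", "negative", "neutral", "positive", "very positive"]

def build_record(news: str, label: str) -> dict:
    """Build one FinGPT-v1-style sentiment record (schema assumed)."""
    if label not in LABELS:
        raise ValueError(f"unknown label: {label!r}")
    return {
        "instruction": ("What is the sentiment of this news? "
                        "Answer:{" + "/".join(LABELS) + "}"),
        "input": news,
        "output": label,
    }

rec = build_record("Company X beats earnings expectations.", "positive")
```

Every record shares the same instruction; only `input` and `output` vary, which is exactly why a single instruction type suffices in this setup.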
-
When looking into #68578, I found that the implementation of `runtime.procyield` on GOARCH=arm64 uses the `YIELD` instruction, and that the `YIELD` instruction is in effect a (fast) `NOP`.
The curr…
rhysh updated 2 months ago
-
I am really inspired by, and thankful for, your nice work.
My question is: why is the text encoder frozen during training?
When I fine-tune the VISTA model using other datasets such as M-BEIR, the results wi…
-
- Paper name: From Quantity to Quality: Boosting LLM Performance with Self-Guided Data Selection for Instruction Tuning
- ArXiv Link: https://arxiv.org/abs/2308.12032
To close this issue open a …
-
Hi everyone,
I trained a LoRA; now that I have enhanced my dataset, I would like to fine-tune my trained LoRA.
1. Do you know how to do it? Are there instructions for this?
I tried to change model …
-
# URL
- https://arxiv.org/abs/2310.03744
# Affiliations
- Haotian Liu, N/A
- Chunyuan Li, N/A
- Yuheng Li, N/A
- Yong Jae Lee, N/A
# Abstract
- Large multimodal models (LMM) have recently sh…