-
Hi, thanks again for the amazing work here! When I tried to fine-tune the model with our sample data, I was able to initialize some parts of the training, but I got the following issue related to "cpu "i…
-
So I fine-tuned a model using a custom dataset. The output should be in JSON format. All the keys are the same for each output, i.e. the structure of the response JSON is the same, while the values need to be e…
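A minimal sketch of the extraction step described above, using only the stdlib `json` module. The key names here are placeholders, not the actual schema from the fine-tuning run; the point is just to verify that each response parses and carries the same fixed key set before pulling out the values:

```python
import json

# Hypothetical fixed schema; replace with the keys your fine-tuned model emits.
EXPECTED_KEYS = {"name", "date", "amount"}

def extract_values(raw: str) -> dict:
    """Parse a model response and verify it matches the fixed JSON structure."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    if set(data) != EXPECTED_KEYS:
        raise ValueError(f"unexpected keys: {set(data) ^ EXPECTED_KEYS}")
    return data

# A well-formed response passes; a response with missing keys raises.
print(extract_values('{"name": "ACME", "date": "2024-01-01", "amount": 42}'))
```

Gating on the key set like this makes truncated or malformed generations fail loudly instead of silently producing partial records.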
-
Hi 👋🏻 Do you have any inference examples that I could use?
-
Dear authors of VideoLLaMA2,
Thanks for the great work. We tried to reproduce your results on vllava datasets using the latest version of the code. However, we observe a large discrepancy in the thre…
-
### Describe the issue
Issue:
When I run finetune_qlora.sh on a V100 to fine-tune LLaMA-2, I get a CUDA error. Do you know how to solve it? Thanks a lot. @haotian-liu
Command:
```
Here is configur…
```
-
How would you fine-tune in this style with an instruction fine-tuning dataset like Open-Orca?
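One common approach is to map each dataset row into whatever prompt/completion pair the fine-tuning script expects. A sketch, assuming Open-Orca's `system_prompt`/`question`/`response` columns and a generic Alpaca-style template (not this repo's own format):

```python
# Sketch: convert one Open-Orca-style row into a prompt/completion pair.
# The template below is an assumed, generic instruction layout.
def format_orca_row(row: dict) -> dict:
    prompt = (
        f"{row.get('system_prompt', '').strip()}\n\n"
        f"### Instruction:\n{row['question']}\n\n### Response:\n"
    )
    return {"prompt": prompt, "completion": row["response"]}

sample = {
    "system_prompt": "You are a helpful assistant.",
    "question": "What is 2 + 2?",
    "response": "4",
}
print(format_orca_row(sample)["prompt"])
```

From there the pairs can be tokenized and fed to the trainer like any other instruction dataset.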
-
-
Consider pruning methods that are efficient for large models, such as SparseGPT: https://arxiv.org/abs/2301.00774
Besides training a small fully-connected model, pruning the large model to obtain a sparse network might also be a viable approach.
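As a toy illustration of the idea (plain magnitude pruning, much simpler than SparseGPT's Hessian-based weight updates), here is a sketch that zeroes out the smallest-magnitude entries of a weight matrix to reach a target sparsity:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the `sparsity` fraction of entries with the smallest |w|."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold            # keep only larger weights
    return weights * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = magnitude_prune(w, 0.5)
print(f"sparsity: {np.mean(pruned == 0):.2f}")
```

SparseGPT goes further by solving a layer-wise reconstruction problem so the remaining weights compensate for the pruned ones, but the masking step above is the common core.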
-
Hi,
Thank you for the awesome library!
I am using `litgpt version 0.4.11`
Currently I am fine-tuning `Phi-3.5-mini-instruct` using LoRA. Even though I set `--train.max_seq_length 10000` I st…
-
**Description:** I am experiencing issues using my GPU (Quadro K2200) with the latest software. Below is the log output when I try to load a model.
**Steps Taken:**
1. Initially, I was using the…