-
I have followed the instructions in `finetune_lora.sh` and obtained the trained model.
This is my `finetune_lora.sh`:
```bash
#!/bin/bash
################## VICUNA ##################
PROMPT_VERSION=v1
…
```
-
Hi, I am trying to fine-tune LLaMA on commonsense_170k. However, I find that once the loss reaches around 0.6, it almost stops decreasing. Is this normal?
` WORLD_SIZE=2 CUDA_VISIBLE_DEVICES=1,2,3,4 …
-
After completing the book and going through the design patterns, review the book itself.
Specifically for this book, we need to assess whether the examples for the different design patterns made sens…
-
This paper is great. I want to train this model on my own dataset. Can this code be used to train the model?
-
**What kind of device or service would you like to see an adapter for?**
NFL team results and updates
**Is the device connected to the internet, or only available on the local network?**
yes
**Is…
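As a sketch of what such an adapter might consume, here is a small parser over a made-up results payload (the field names `home`, `away`, `home_score`, and `away_score` are assumptions for illustration, not any real NFL API):

```python
import json

# Made-up example payload; a real adapter would fetch this from a scores API
payload = json.loads("""
[
  {"home": "Packers", "away": "Bears", "home_score": 24, "away_score": 17},
  {"home": "Eagles", "away": "Giants", "home_score": 10, "away_score": 13}
]
""")

def winner(game):
    """Return the winning team's name, or None for a tie."""
    if game["home_score"] == game["away_score"]:
        return None
    return game["home"] if game["home_score"] > game["away_score"] else game["away"]

# Summarize each game as "away @ home" -> winner
results = {f'{g["away"]} @ {g["home"]}': winner(g) for g in payload}
print(results)  # {'Bears @ Packers': 'Packers', 'Giants @ Eagles': 'Giants'}
```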
-
**Is your feature request related to a problem? Please describe.**
Currently, models from DARTS are not supported in sktime.
**Describe the solution you'd like**
Since the DARTS models use a unifi…
-
Hi, thanks for your great work!
When I try to reproduce the results on the commonsense reasoning datasets, they turn out not as good as those in the table. The setup I use is the same as for the math reasoning task…
-
- [ ] Engine
- [x] Step fusing [@haifeng-jin]
- [ ] TPU support
- [x] tf.distribute support
- [ ] DTensor support
- [ ] Sparse inputs support
- [x] Stack trace filtering [@fchollet]
…
-
When I'm doing the evaluation, should I use `--load_8bit`? I'm trying to reproduce the results of LLaMA-7B-LoRA.
Finetune:
`CUDA_VISIBLE_DEVICES=8 python finetune.py --base_model 'yahma/llama-7b-…
-
Hello,
I'm using the following script to fine-tune the llama3 model with a custom dataset of questions and responses in the `{"prompt": "", "completion": ""}` format defined [here](https://github.com/…