-
Hello, I ran the anaconda instructions, but when I try to use `accelerate launch main.py --config config/align_prop.py:hps` or `accelerate launch main.py --config config/align_prop.py:evaluate` I receive t…
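For context on the `:hps` / `:evaluate` suffix: with ml_collections-style config flags, the text after the colon in `--config path.py:<name>` is passed to the config file's `get_config` function, which returns the named variant. Assuming this repo follows that convention, here is a minimal sketch of the pattern (the option names below are illustrative, not AlignProp's actual settings):

```python
# Sketch of the `--config config/align_prop.py:<name>` dispatch pattern.
# `get_config` receives the text after the colon and returns the matching
# config. All keys here are illustrative placeholders.

def get_config(name: str) -> dict:
    base = {"num_epochs": 10, "lr": 1e-3}        # shared defaults (illustrative)
    if name == "hps":
        return {**base, "reward_fn": "hps"}      # train against the HPS reward
    if name == "evaluate":
        return {**base, "only_eval": True}       # evaluation-only run
    raise ValueError(f"unknown config name: {name}")

print(get_config("hps")["reward_fn"])
```

If the launch command fails before any of your own code runs, a misspelled name after the colon (or a config file without a `get_config`) raises exactly this kind of error at startup.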
-
```
/home/ma-user/anaconda3/envs/chatglmeV2/bin/python /home/ma-user/work/zhanghongjun/ChatGLM-Efficient-Tuning-main/src/train_ppo.py --do_train --dataset alpaca_gpt4_en --finetuning_type lora --checkpoin…
```
-
The Linux GPU instructions say:
An example of running h2oGPT via docker using AutoGPTQ (4-bit, so using less GPU memory) with the LLaMa2 7B model is:
```
mkdir -p $HOME/.cache
mkdir -p $HOME/save
e…
-
**Describe the bug**
When using hybrid_engine + bloomz with ZeRO stage 2, an error is reported; it seems to say that bloomz does not support hybrid_engine.
**Log output**
```
Traceback (most recent call …
-
Hello Team,
I ran the program on RHEL 8.x, and my GPU is an A100 with 20GB of memory. I followed all the installation steps in the documentation. Then, when I run this command to launch:
python generate.…
-
Hi, everyone. My question is about the steps mentioned for fine-tuning the Stack_llama model: do I need to run all the steps at once?
```
There were three main steps to the training process:
Super…
-
Hello, I got an error when running the third stage (RM) with chatglm-6b. Looking at reward_modeling.py, I see it does not handle ChatGLM-type models. Is it currently not possible to run ChatGLM models in the RM and RL stages? Will tested support for these two stages with ChatGLM models be released soon? And if I only train on domain data using the PT and SFT stages, will that have a large impact on the model?
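For background on what the RM stage is optimizing: reward modeling in these pipelines typically puts a scalar head on the base model and trains it with a pairwise Bradley-Terry loss, `-log sigmoid(r_chosen - r_rejected)`. A minimal, model-agnostic sketch of that loss (plain Python, no ChatGLM specifics; the function name is illustrative):

```python
import math

def pairwise_rm_loss(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry reward-model loss: -log sigmoid(r_chosen - r_rejected).

    The loss shrinks when the model scores the chosen response above the
    rejected one, and grows when the ranking is inverted.
    """
    margin = r_chosen - r_rejected
    # log1p(exp(-margin)) equals -log(sigmoid(margin)), computed stably.
    return math.log1p(math.exp(-margin))

# A correct ranking is penalized less than an inverted one.
print(pairwise_rm_loss(2.0, 1.0) < pairwise_rm_loss(1.0, 2.0))
```

Since the loss only depends on the scalar scores, the backbone architecture is interchangeable in principle; the practical blocker is whether the training script knows how to attach the value head to that model family.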
-
### 🐛 Bug
The Create experiments section says "something went wrong". The error report from LLM Studio is below.
### To Reproduce
q.app
script_sources: ['/_f/f2c95323-c429-4fb7-96ad-67e39b51a8dc/…
-
### 🐛 Describe the bug
This is a follow-up from issue #399.
I am facing this same issue even with the updated code.
I am trying to reproduce the HH fine-tuning example on Alpaca.
```
trlx…
-
Thank you for sharing this awesome theme with us. I found a small issue: the color of the macro button cannot be changed. This is how it looks:
![image](https://github.com/OrloDavid/Kustom…