-
When running https://colab.research.google.com/github/shibing624/MedicalGPT/blob/main/run_training_dpo_pipeline.ipynb#scrollTo=J5kYehpzESyt (run_training_dpo_pipeline.ipynb), the pretrain stage throws an error.
-
### Describe the Question
Please provide a clear and concise description of what the question is.
Teacher Xu, running the notebook you provided throws an error:
![image](https://github.com/shibing624/MedicalGPT/assets/72805517/560cea89-3abc…
-
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
https://github.com/shibing624/MedicalGPT
Refer to this project; pretraining, instruction fine-tuning, RM training, and PPO are all available there ready-made.
### Expected Beha…
-
Hi all, pretraining ChatGLM3 with pretraining.py fails with:
RuntimeError: CUDA error: device-side assert triggered
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
How can I fix this?
It fails at training step 134 every time; earlier training runs were all…
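A device-side assert during pretraining is very often an out-of-range token id hitting the embedding table (e.g. a label or special-token id the model's vocab does not contain); setting `CUDA_LAUNCH_BLOCKING=1` also makes the failing kernel report synchronously. A minimal sketch, assuming the crash is id-related, that scans tokenized batches on CPU before re-running on GPU — the vocab size and batch contents below are illustrative, not from the report:

```python
# Hypothetical sketch: find token ids that would overflow the embedding table,
# a common cause of "CUDA error: device-side assert triggered" in pretraining.
def find_bad_token_ids(batches, vocab_size):
    """Return (batch_index, token_id) pairs whose id falls outside [0, vocab_size)."""
    bad = []
    for i, ids in enumerate(batches):
        for tid in ids:
            if tid < 0 or tid >= vocab_size:
                bad.append((i, tid))
    return bad

# Illustrative data: the second batch holds an invalid id equal to vocab_size.
batches = [[1, 5, 64000], [2, 65024]]
print(find_bad_token_ids(batches, vocab_size=65024))  # -> [(1, 65024)]
```

Running this over the exact batches fed at step 134 (e.g. by logging them from the data collator) narrows down whether the assert is a data problem rather than a model bug.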
-
I tried to run parallel training with
CUDA_VISIBLE_DEVICES=0,1 torchrun --nproc_per_node 2
but it errors out immediately:
```
ValueError: You can't train a model that has been loaded in 8-bit precision on multiple devices in any distributed mode.…
```
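A bitsandbytes 8-bit model cannot be *sharded* across GPUs under DDP; each torchrun worker must hold its own full copy on its own device. One common workaround (a sketch, not this project's official fix) is to map the entire model onto each worker's `LOCAL_RANK` GPU via `device_map`:

```python
# Hypothetical sketch: under torchrun, pin the whole 8-bit model to the
# local rank's GPU so DDP sees one full copy per process instead of a
# model sharded across devices (which raises the ValueError above).
import os

def device_map_for_ddp():
    # torchrun sets LOCAL_RANK in every worker process; default to 0 otherwise
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    # the key "" means "the entire model" in transformers' device_map convention
    return {"": local_rank}

# Then load the model with (assumption: transformers + bitsandbytes installed):
# model = AutoModelForCausalLM.from_pretrained(
#     model_name, load_in_8bit=True, device_map=device_map_for_ddp())
print(device_map_for_ddp())
```

Note this doubles memory use compared with sharding, since every GPU holds the full model; if that does not fit, dropping `load_in_8bit` or training single-process with `device_map="auto"` are the usual alternatives.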
-
"This project open-sources a Chinese medical general-purpose model instruction-fine-tuned from ChatGLM-6B with 16-bit LoRA." Where is the model?
Following the steps in the quick-start guide, it fails to run.
laszo@LAPTOP-6MNNHCID:~$ . myvenv/bin/activate
(myvenv) laszo@LAPTOP-6MNNHCID:~$ cd /mnt/d/dev/code/MedicalGPT-zh/
(myvenv) l…
-
### Describe the Question
Please provide a clear and concise description of what the question is.
Could you provide an example covering both training and inference for each stage, from pretraining through SFT to RLHF, chaining them together? For instance: after pretraining, run an inference test; if it looks OK, move on to SFT, then another inference test, and so on. That would make it easier for everyone to discuss the…
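The requested workflow is essentially a loop: train a stage, smoke-test it with inference, and only advance if the output looks sane. A minimal sketch of that control flow — stage names follow the MedicalGPT pipeline, while the `train` and `smoke_test` callables are placeholders for your own training and generation commands:

```python
# Hypothetical sketch: run each training stage, then an inference smoke test,
# and only advance to the next stage if the test passes.
def run_pipeline(stages, train, smoke_test):
    finished = []
    for stage in stages:            # e.g. pretrain -> sft -> reward_model -> ppo
        train(stage)                # placeholder for the stage's training script
        if not smoke_test(stage):   # placeholder: eyeball a few generations
            raise RuntimeError(f"{stage} failed its inference check")
        finished.append(stage)
    return finished

stages = ["pretrain", "sft", "reward_model", "ppo"]
print(run_pipeline(stages, train=lambda s: None, smoke_test=lambda s: True))
```

In practice each `train(stage)` would invoke the corresponding script (pretraining, SFT, RM, PPO) and `smoke_test` would load the stage's checkpoint and generate a few sample answers for manual review.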
-
Inference works fine; is training/fine-tuning supported? (It was not yet when I tried.)
-
```
Traceback (most recent call last):
  File "F:\xiazai\MedicalGPT-main\pretraining.py", line 781, in <module>
    main()
  File "F:\xiazai\MedicalGPT-main\pretraining.py", line 722, in main
    trainer = Sa…
```
-
### Is your feature request related to a problem? Please describe.
_No response_
### Solutions
How can I do continued (second-stage) pretraining on domain-specific data?
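Continued pretraining on a domain corpus typically reuses the causal-LM objective on raw domain text: tokenize, concatenate, and cut into fixed-length blocks, then resume training from the base checkpoint. A minimal sketch of the block-grouping step, with whitespace "tokens" standing in for a real tokenizer:

```python
# Hypothetical sketch: group a raw domain corpus into fixed-length blocks,
# the standard preprocessing step for causal-LM (continued) pretraining.
# Whitespace splitting stands in for a real tokenizer here.
def group_into_blocks(texts, block_size):
    """Concatenate tokenized texts and cut into equal blocks; drop the remainder."""
    tokens = []
    for t in texts:
        tokens.extend(t.split())
    n = (len(tokens) // block_size) * block_size
    return [tokens[i:i + block_size] for i in range(0, n, block_size)]

corpus = ["medical domain text one", "more in-domain sentences here"]
print(group_into_blocks(corpus, block_size=4))
```

The resulting blocks can then be fed to the project's pretraining script as the training dataset; block size should match the model's context length budget.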
### Additional context
_No response_