Open zhurunhua opened 1 year ago
I'm doing P-Tuning with the 6B-INT4 model. My expectation is that when asked a question, the model answers verbatim from the original text. How should I prepare the data, or how should I train the model? I've tried several times, and the answers always deviate from the source, or become wrong again when I change the order of the questions.
The training data format is:
{"context": "xxxx", "instruction": "Answer the input question using the original text.", "question": "xxxx?", "answer": "xxxx"}
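A minimal sketch of preparing data in this format: P-Tuning training scripts for ChatGLM-6B typically read one JSON object per line (JSONL). The field names come from the post above; the record contents and the `train.json` filename are placeholders.

```python
import json

# Hypothetical example records in the format described above; the field
# names (context/instruction/question/answer) come from the post, while
# the contents are placeholders you would replace with your own text.
samples = [
    {
        "context": "The warranty period for this product is 24 months.",
        "instruction": "Answer the input question using the original text.",
        "question": "How long is the warranty period?",
        # To train verbatim answering, the answer repeats the context.
        "answer": "The warranty period for this product is 24 months.",
    },
]

# Write one JSON object per line (JSONL), keeping non-ASCII characters
# readable with ensure_ascii=False.
with open("train.json", "w", encoding="utf-8") as f:
    for sample in samples:
        f.write(json.dumps(sample, ensure_ascii=False) + "\n")
```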
- OS: Windows 10
- Python: 3.10
- Transformers: 4.27.1
- PyTorch: 1.18
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`): True
This can be achieved by combining the model with a local knowledge base.
Could you explain that in more detail? I'm new to this and my experiments so far haven't worked well.
https://github.com/imClumsyPanda/langchain-ChatGLM
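A minimal sketch of the retrieval idea behind langchain-ChatGLM: instead of fine-tuning the model to memorize documents, look up the most relevant passage in a local knowledge base and feed it to the model verbatim, so the answer can quote the original text. Real implementations use sentence embeddings and a vector store; plain word overlap is used here only to keep the example self-contained, and all passages are placeholders.

```python
import re

def tokenize(text: str) -> set[str]:
    """Lowercase a string and split it into a set of word tokens."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str, passages: list[str]) -> str:
    """Return the passage sharing the most word tokens with the question.

    Stand-in for embedding-based similarity search in a vector store.
    """
    q_tokens = tokenize(question)
    return max(passages, key=lambda p: len(q_tokens & tokenize(p)))

# Hypothetical local knowledge base of source passages.
knowledge_base = [
    "The warranty period for this product is 24 months.",
    "Returns are accepted within 14 days of purchase.",
]

best = retrieve("How long is the warranty period?", knowledge_base)
# The retrieved passage is then placed in the prompt so the model can
# quote it directly, rather than relying on memorized training data.
```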