THUDM / ChatGLM-6B

ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型
Apache License 2.0

[BUG/Help] How do I do P-Tuning for Q&A tasks? How can I make sure every question is answered verbatim from the source text? #1185

Open zhurunhua opened 1 year ago

zhurunhua commented 1 year ago

Is there an existing issue for this?

Current Behavior

I am doing P-Tuning with 6B-INT4. The expectation is that when a question is asked, the model answers verbatim from the original text. How should I prepare the data, or how should I train the model? I have tried several times and the answers always deviate, and if I change the order of the questions the answers go wrong again.

Expected Behavior

No response

Steps To Reproduce

The training data format is:

{"context":"xxxx", "instruction": "根据输入的问题,用原文进行回答。","question":"xxxx?","answer":"xxxx"}

Environment

- OS: Windows 10
- Python: 3.10
- Transformers: 4.27.1
- PyTorch: 1.18
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`) : True

Anything else?

No response

codeAndxv commented 1 year ago

This can be done by combining the model with a local knowledge base.

zhurunhua commented 1 year ago

Could you explain in more detail? I am new to this, and many of my attempts have not worked well.

freelancerllm commented 1 year ago

> Could you explain in more detail? I am new to this, and many of my attempts have not worked well.

https://github.com/imClumsyPanda/langchain-ChatGLM
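
The linked project implements retrieval-augmented Q&A: relevant passages are retrieved from a local knowledge base and prepended to the question, so the model can answer by quoting the source text instead of relying on P-Tuning to memorize it. A minimal sketch of the idea (the overlap-based retrieval and the prompt template here are simplified illustrations, not the project's actual implementation):

```python
from transformers import AutoModel, AutoTokenizer

# Standard ChatGLM-6B loading as shown in the repo README.
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda().eval()

# Your local knowledge base, split into passages (placeholder content).
passages = [
    "Passage 1 of your document ...",
    "Passage 2 of your document ...",
]

def retrieve(question: str) -> str:
    # Naive character-overlap scoring for illustration; a real system
    # (such as langchain-ChatGLM) would use embedding similarity.
    return max(passages, key=lambda p: len(set(question) & set(p)))

def answer(question: str) -> str:
    context = retrieve(question)
    prompt = (
        "Answer the question using only the original text below.\n"
        f"Text: {context}\nQuestion: {question}"
    )
    response, _history = model.chat(tokenizer, prompt, history=[])
    return response

print(answer("xxxx?"))
```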