mymusise / ChatGLM-Tuning

A finetuning approach based on ChatGLM-6B + LoRA
MIT License

[BUG] data preprocessing bug #217

Closed: ticoAg closed this issue 1 year ago

ticoAg commented 1 year ago

tokenize_dataset_rows.py

import json

import transformers
from tqdm import tqdm


def read_jsonl(path, max_seq_length, skip_overlength=False):
    model_name = "THUDM/chatglm-6b"
    tokenizer = transformers.AutoTokenizer.from_pretrained(
        model_name, trust_remote_code=True)
    config = transformers.AutoConfig.from_pretrained(
        model_name, trust_remote_code=True, device_map='auto')
    with open(path, "r") as f:
        for line in tqdm(f.readlines()):
            example = json.loads(line)
            feature = preprocess(tokenizer, config, example, max_seq_length)
            if skip_overlength and len(feature["input_ids"]) > max_seq_length:
                continue
            # hard truncation: can silently drop the trailing eos_token_id
            feature["input_ids"] = feature["input_ids"][:max_seq_length]
            yield feature

When skip_overlength is not set and len(feature["input_ids"]) > max_seq_length, the [:max_seq_length] slice hard-truncates the sequence, so feature["input_ids"] loses its trailing tokenizer.eos_token_id. Training on such samples may leave the model with no signal for when to stop generating. By the way, what are the special tokens <sop>, <eop>, and [gMASK] added by add_special_tokens for?
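A minimal sketch of one possible fix, not taken from the repo: reserve the last position for EOS when truncating. truncate_keep_eos is a hypothetical helper; config.eos_token_id is used as in the snippet above.

def truncate_keep_eos(input_ids, max_seq_length, eos_token_id):
    # Truncate to max_seq_length, but keep a trailing EOS so the
    # model still sees a stop signal on over-length samples.
    if len(input_ids) > max_seq_length:
        input_ids = input_ids[:max_seq_length - 1] + [eos_token_id]
    return input_ids

# In read_jsonl, this would replace the hard slice:
# feature["input_ids"] = truncate_keep_eos(
#     feature["input_ids"], max_seq_length, config.eos_token_id)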

mymusise commented 1 year ago

This problem does indeed exist. As a workaround, you can enable skip_overlength for now.
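For example (a sketch; the jsonl path and length are hypothetical), calling the generator with skip_overlength=True drops over-length samples instead of truncating them:

for feature in read_jsonl("data/train.jsonl", max_seq_length=384,
                          skip_overlength=True):
    # over-length examples are skipped entirely, so every yielded
    # sample keeps its trailing eos_token_id intact
    ...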

mymusise commented 1 year ago
For [gMASK] and the other special tokens, see the GLM paper, which describes them in detail. Roughly, they are designed for the prefix decoder and mark out the prompt part.
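To see where these tokens land, here is a quick inspection sketch (assuming the THUDM/chatglm-6b tokenizer from the Hub; the exact token sequence may vary across tokenizer versions):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "THUDM/chatglm-6b", trust_remote_code=True)
ids = tokenizer.encode("你好")  # add_special_tokens=True by default
print(tokenizer.convert_ids_to_tokens(ids))
# The encoded prompt typically ends with [gMASK] and <sop>:
# [gMASK] marks the blank the prefix decoder should fill in,
# <sop> marks where generation starts, and <eop> terminates
# the generated span.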