ssbuild / chatglm_finetuning

chatglm 6b finetuning and alpaca finetuning

Rebasing onto the latest code fails: RuntimeError: CUDA error: device-side assert triggered #186

Closed · leoluopy closed this 1 year ago

leoluopy commented 1 year ago

After rebasing onto the latest code, training fails with: RuntimeError: CUDA error: device-side assert triggered

Full log:

Traceback (most recent call last):
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/trainer/call.py", line 36, in _call_and_handle_interrupt
    return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/strategies/launchers/subprocess_script.py", line 88, in launch
    return function(*args, **kwargs)
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 650, in _fit_impl
    self._run(model, ckpt_path=self.ckpt_path)
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1112, in _run
    results = self._run_stage()
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1191, in _run_stage
    self._run_train()
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1214, in _run_train
    self.fit_loop.run()
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 267, in advance
    self._outputs = self.epoch_loop.run(self._data_fetcher)
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 213, in advance
    batch_output = self.batch_loop.run(kwargs)
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 88, in advance
    outputs = self.optimizer_loop.run(optimizers, kwargs)
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 202, in advance
    result = self._run_optimization(kwargs, self._optimizers[self.optim_progress.optimizer_position])
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 249, in _run_optimization
    self._optimizer_step(optimizer, opt_idx, kwargs.get("batch_idx", 0), closure)
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 370, in _optimizer_step
    self.trainer._call_lightning_module_hook(
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1356, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/core/module.py", line 1742, in optimizer_step
    optimizer.step(closure=optimizer_closure)
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/core/optimizer.py", line 169, in step
    step_output = self._strategy.optimizer_step(self._optimizer, self._optimizer_idx, closure, **kwargs)
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/strategies/ddp.py", line 280, in optimizer_step
    optimizer_output = super().optimizer_step(optimizer, opt_idx, closure, model, **kwargs)
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 234, in optimizer_step
    return self.precision_plugin.optimizer_step(
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 119, in optimizer_step
    return optimizer.step(closure=closure, **kwargs)
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/torch/optim/lr_scheduler.py", line 69, in wrapper
    return wrapped(*args, **kwargs)
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/torch/optim/optimizer.py", line 280, in wrapper
    out = func(*args, **kwargs)
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/algo_share/projects/peiyuanluo/workspace/chatglm_finetuning/deep_training/nlp/optimizer/lion/lion.py", line 72, in step
    loss = closure()
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 105, in _wrap_closure
    closure_result = closure()
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 149, in __call__
    self._result = self.closure(*args, **kwargs)
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 135, in closure
    step_output = self._step_fn()
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/optimizer_loop.py", line 419, in _training_step
    training_step_output = self.trainer._call_strategy_hook("training_step", *kwargs.values())
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1494, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/strategies/ddp.py", line 351, in training_step
    return self.model(*args, **kwargs)
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1156, in forward
    output = self._run_ddp_forward(*inputs, **kwargs)
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1110, in _run_ddp_forward
    return module_to_run(*inputs[0], **kwargs[0])  # type: ignore[index]
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/overrides/base.py", line 98, in forward
    output = self._forward_module.training_step(*inputs, **kwargs)
  File "/home/algo_share/projects/peiyuanluo/workspace/chatglm_finetuning/deep_training/nlp/models/transformer_base.py", line 529, in training_step
    outputs = self.compute_loss(**batch)
  File "/home/algo_share/projects/peiyuanluo/workspace/chatglm_finetuning/deep_training/nlp/models/transformer_base.py", line 360, in compute_loss
    return self.model.compute_loss(**kwargs)
  File "/home/algo_share/projects/peiyuanluo/workspace/chatglm_finetuning/deep_training/nlp/models/transformer_base.py", line 119, in compute_loss
    return self.model(*args, **batch)
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/algo_share/projects/peiyuanluo/workspace/chatglm_finetuning/deep_training/nlp/models/chatglm/__init__.py", line 1200, in forward
    transformer_outputs = self.transformer(
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/algo_share/projects/peiyuanluo/workspace/chatglm_finetuning/deep_training/nlp/models/chatglm/__init__.py", line 1015, in forward
    layer_ret = layer(
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/algo_share/projects/peiyuanluo/workspace/chatglm_finetuning/deep_training/nlp/models/chatglm/__init__.py", line 619, in forward
    attention_outputs = self.attention(
  File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/algo_share/projects/peiyuanluo/workspace/chatglm_finetuning/deep_training/nlp/models/chatglm/__init__.py", line 462, in forward
    q1, k1 = apply_rotary_pos_emb_index(q1, k1, cos, sin, position_ids)
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: CUDA driver error: device-side assert triggered

During handling of the above exception, another exception occurred:

Traceback (most recent call last): File "train.py", line 180, in trainer.fit(pl_model, train_dataloaders=train_datasets) File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 608, in fit call._call_and_handle_interrupt( File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/trainer/call.py", line 63, in _call_and_handle_interrupt trainer._teardown() File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1175, in _teardown self.strategy.teardown() File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/strategies/ddp.py", line 490, in teardown super().teardown() File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/strategies/parallel.py", line 125, in teardown super().teardown() File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 496, in teardown self.lightning_module.cpu() File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/lightning_fabric/utilities/device_dtype_mixin.py", line 78, in cpu return super().cpu() File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/torch/nn/modules/module.py", line 954, in cpu return self._apply(lambda t: t.cpu()) File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/torch/nn/modules/module.py", line 797, in _apply module._apply(fn) File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/torch/nn/modules/module.py", line 797, in _apply module._apply(fn) File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/torch/nn/modules/module.py", line 797, in _apply module._apply(fn) [Previous line repeated 3 more times] File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/torch/nn/modules/module.py", line 820, in _apply param_applied = fn(param) File "/home/leo/.pyenv/versions/anaconda3-2020.11/lib/python3.8/site-packages/torch/nn/modules/module.py", line 954, in return self._apply(lambda t: t.cpu()) RuntimeError: CUDA error: device-side assert triggered CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

Epoch 0: 0%| | 0/3 [00:02<?, ?it/s]
terminate called after throwing an instance of 'c10::Error'
  what():  CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

Exception raised from c10_cuda_check_implementation at ../c10/cuda/CUDAException.cpp:44
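
Editor's note: because the assert is reported asynchronously, the Python frame it is attributed to (here `apply_rotary_pos_emb_index`) may not be the real faulting op. A minimal sketch for getting a synchronous, accurate trace on the next run, assuming you can edit the entry script:

```python
# Put this at the very top of train.py, before torch (or anything that
# imports torch) is loaded, so every CUDA kernel launch becomes synchronous
# and the stack trace points at the actual faulting call.
import os
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'

import torch  # import only after the environment variable is set
```

The same effect from the shell: `CUDA_LAUNCH_BLOCKING=1 python train.py`.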

leoluopy commented 1 year ago

To the repo maintainer: which parts of the environment should I double-check?

ssbuild commented 1 year ago

To the repo maintainer: which parts of the environment should I double-check?

Is this LoRA int8? I believe the README covers this. I suggest using Docker for an isolated environment.

leoluopy commented 1 year ago

My config is as follows:

{
  "architectures": ["ChatGLMModel"],
  "auto_map": {
    "AutoConfig": "configuration_chatglm.ChatGLMConfig",
    "AutoModel": "modeling_chatglm.ChatGLMForConditionalGeneration",
    "AutoModelForSeq2SeqLM": "modeling_chatglm.ChatGLMForConditionalGeneration"
  },
  "bos_token_id": 130004,
  "eos_token_id": 130005,
  "mask_token_id": 130000,
  "gmask_token_id": 130001,
  "pad_token_id": 3,
  "hidden_size": 4096,
  "inner_hidden_size": 16384,
  "layernorm_epsilon": 1e-05,
  "max_sequence_length": 2048,
  "model_type": "chatglm",
  "num_attention_heads": 32,
  "num_layers": 4,
  "position_encoding_2d": true,
  "torch_dtype": "float16",
  "transformers_version": "4.23.1",
  "use_cache": true,
  "vocab_size": 150528,
  "precision": 16,
  "quantization_bit": 0,
  "pre_seq_len": null,
  "prefix_projection": false
}
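
Editor's note: a device-side assert during an embedding-style lookup is most often an out-of-range index. Since this config declares `vocab_size: 150528`, a quick sanity check is to confirm the tokenizer never emits ids beyond the embedding table. A minimal sketch, assuming the local checkpoint path quoted later in this thread; the sample string is arbitrary:

```python
# Sketch: verify that tokenized ids fit inside the vocab_size declared in the
# config above. Ids >= vocab_size trigger exactly this kind of
# "CUDA error: device-side assert triggered" inside embedding lookups.
from transformers import AutoTokenizer

MODEL_DIR = '/home/leo/Downloads/chatglm-6b/'  # local checkpoint path from this thread
VOCAB_SIZE = 150528                            # from the config.json pasted above

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, trust_remote_code=True)
ids = tokenizer.encode("从南京到上海的路线")  # any sample text works
print('max token id:', max(ids), '| declared vocab_size:', VOCAB_SIZE)
assert max(ids) < VOCAB_SIZE, 'token id out of range for the embedding table'
```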

leoluopy commented 1 year ago

# @Time    : 2023/1/22 16:22
# @Author  : tk
# @FileName: data_utils.py

import copy
import json
import os
import random
import typing
from enum import Enum

import numpy as np
import torch
from deep_training.data_helper import DataHelper, ModelArguments, TrainingArguments, DataArguments
from deep_training.nlp.models.chatglm import ChatGLMConfig
from deep_training.nlp.models.lora.v2 import LoraArguments
from deep_training.utils.func import is_chinese_char
from fastdatasets.record import load_dataset as Loader, RECORD, WriterObject, gfile
from tqdm import tqdm
from transformers import HfArgumentParser

from data_processer import DataStrategy, TokenTruncation, TokenSingleSliding, TokenDoubleSliding
from models import ChatGLMTokenizer

lora_info_args = {
    'with_lora': True,  # whether to enable the LoRA module
    'r': 8,
    'target_modules': ['query_key_value'],
    'target_dtype': 16,  # half precision
    'lora_alpha': 32,
    'lora_dropout': 0.1,
    'bias': 'none',  # bias type for LoRA; can be 'none', 'all' or 'lora_only'
}

adalora_info_args = {
    'with_lora': True,  # whether to enable the AdaLoRA module
    'r': 8,
    'target_modules': ['query_key_value'],
    'target_dtype': 16,  # half precision
    'lora_alpha': 32,
    'lora_dropout': 0.1,
    'bias': 'none',  # bias type for LoRA; can be 'none', 'all' or 'lora_only'

    'target_r': 8,           # target LoRA matrix dimension
    'init_r': 12,            # initial LoRA matrix dimension
    'tinit': 0,              # the steps of initial warmup
    'tfinal': 0,             # the steps of final warmup
    'deltaT': 1,             # step interval of rank allocation
    'beta1': 0.85,           # hyperparameter of EMA
    'beta2': 0.85,           # hyperparameter of EMA
    'orth_reg_weight': 0.5,  # the orthogonal regularization coefficient
    'total_step': None,      # the total training steps
    'rank_pattern': None,    # the saved rank pattern
}

train_info_args = {
    'devices': 1,
    'data_backend': 'record',
    'model_type': 'chatglm',
    # path of the pretrained model; leave empty to train from scratch
    'model_name_or_path': '/home/leo/Downloads/chatglm-6b/',
    'config_name': './config/config.json',
    'tokenizer_name': '/home/leo/Downloads/chatglm-6b/',
    'convert_onnx': False,  # whether to export an ONNX model
    'do_train': True,
    'train_file': ['./data/finetune_train_examples.json'],
    'max_epochs': 20,
    'max_steps': -1,
    'optimizer': 'lion',  # one of adamw, adam, lamb, lion

    'scheduler_type': 'CAWR',
    'scheduler': {'T_mult': 1, 'rewarm_epoch_num': 0.5, 'verbose': False},

    # 'scheduler_type': 'linear',  # one of [linear, WarmupCosine, CAWR, CAL, Step, ReduceLROnPlateau]
    # 'scheduler': None,

    # other scheduler choices:
    # 'scheduler_type': 'WarmupCosine',
    # 'scheduler': None,

    # 'scheduler_type': 'ReduceLROnPlateau',
    # 'scheduler': None,

    # 'scheduler_type': 'Step',
    # 'scheduler': {'decay_rate': 0.999, 'decay_steps': 100, 'verbose': True},

    # 'scheduler_type': 'CAWR',
    # 'scheduler': {'T_mult': 1, 'rewarm_epoch_num': 2, 'verbose': True},

    # 'scheduler_type': 'CAL',
    # 'scheduler': {'rewarm_epoch_num': 2, 'verbose': True},

    'optimizer_betas': (0.9, 0.999),
    'train_batch_size': 4,
    'eval_batch_size': 2,
    'test_batch_size': 2,
    'learning_rate': 2e-5,
    'adam_epsilon': 1e-8,
    'gradient_accumulation_steps': 1,
    'max_grad_norm': 1.0,
    'weight_decay': 0,
    'warmup_steps': 0,
    'output_dir': './output',
    'max_seq_length': 1024,    # with enough resources, 2048 is recommended to match the official setting
    'max_target_length': 100,  # maximum generation length (reserved field)
    'use_fast_tokenizer': False,
    'do_lower_case': False,

    ##############  LoRA section
    # NOTE: lora, adalora and ptuning-v2 must not be enabled at the same time
    'lora': {**lora_info_args},
    'adalora': {**adalora_info_args},
}

# LoRA mode does not support deepspeed yet
enable_deepspeed = False

data_conf = {
    'strategy': DataStrategy.truncation,  # data strategy option
    DataStrategy.truncation: {
        'ensure_answer_min_length': 1,
    },
    DataStrategy.singlesliding: {
        'sliding_size': train_info_args['max_seq_length'] // 3 * 2,  # prompt sliding-window size
        'p': 1,  # p < 0 selects the prompt at random
    },
    DataStrategy.doublesliding: {
        'sliding_size': train_info_args['max_seq_length'] // 3 * 2,  # double-sliding window size
        'p': 1,  # p < 0 selects the prompt at random
    },
}

def get_deepspeed_config():
    # return None when deepspeed is disabled
    if not enable_deepspeed:
        return None
    with open('./deepspeed.json', mode='r', encoding='utf-8') as f:
        deepspeed_config = json.loads(f.read())
    return deepspeed_config

def preprocess(text):
    # text = text.replace("\n", "\\n").replace("\t", "\\t")
    return text


def postprocess(text):
    # return text.replace("\\n", "\n").replace("\\t", "\t")
    return text

class NN_DataHelper(DataHelper):
    index = 1

    def on_data_ready(self):
        self.index = -1

    # tokenization
    def on_data_process(self, data: typing.Any, mode: str):
        self.index += 1
        prompt = data[0]
        answer = data[1]

        tokenizer: ChatGLMTokenizer
        config: ChatGLMConfig
        max_seq_length = self.max_seq_length_dict[mode]
        tokenizer = self.tokenizer
        config = self.config

        if not hasattr(self, 'sptoken'):
            self.sptoken = tokenizer.encode(text="")[-2:]

        a_ids = tokenizer.encode(text=prompt, add_special_tokens=False)
        b_ids = tokenizer.encode(text=answer, add_special_tokens=False)

        strategy = data_conf['strategy']
        if strategy == DataStrategy.truncation:
            ds = TokenTruncation.process(tokenizer, config, a_ids, b_ids, max_seq_length, self.sptoken, **data_conf[strategy])
        elif strategy == DataStrategy.singlesliding:
            ds = TokenSingleSliding.process(tokenizer, config, a_ids, b_ids, max_seq_length, self.sptoken, **data_conf[strategy])
        elif strategy == DataStrategy.doublesliding:
            ds = TokenDoubleSliding.process(tokenizer, config, a_ids, b_ids, max_seq_length, self.sptoken, **data_conf[strategy])
        else:
            raise ValueError('Invalid strategy', strategy)

        if not ds:
            return None

        if self.index < 3:
            print(ds[0])
        return ds

    # expected data format:
    # {
    #     "id": 0, "paragraph": [
    #     # first round of the conversation
    #     {
    #         "q": "从南京到上海的路线",
    #         "a": [
    #             "你好,南京到上海的路线如下:",
    #             "1. 南京到上海,可以乘坐南京地铁1号线,在南京站乘坐轨道交通1号线。",
    #             "2. 南京到浦东机场,可以搭乘上海地铁1号,在陆家嘴站乘坐地铁1线,在浦东国际机场站乘坐机场快线,前往上海浦东国际机场。",
    #             "3. 上海到南京,可以换乘上海地铁2号线,从南京站换乘地铁2线,再从南京南站换乘地铁1路,然后到达上海站"
    #         ]
    #     }
    #     # second round ...
    # ]
    # }

    # read the corpus files
    def on_get_corpus(self, files: typing.List, mode: str):
        D = []
        for file in files:
            with open(file, mode='r', encoding='utf-8', newline='\n') as f:
                lines = f.readlines()

            for line_id, line in enumerate(lines):
                jd = json.loads(line)
                if not jd:
                    continue
                paragraph = jd['paragraph']
                if line_id < 10:
                    print(paragraph)
                paragraph = [(preprocess(session['q']), preprocess('\n'.join(session['a']))) for session in paragraph]
                for sid, (q, a) in enumerate(paragraph):
                    assert len(a), ValueError('answer cannot be empty')
                    if sid == 0:
                        D.append((q, a))
                    else:
                        prompt_text = ''
                        for j in range(sid + 1):
                            if j == sid:
                                prompt_text += "[Round {}]\n问:{}\n答:".format(sid, paragraph[j][0])
                            else:
                                prompt_text += "[Round {}]\n问:{}\n答:{}".format(j, paragraph[j][0], paragraph[j][1])
                        D.append((prompt_text, a))
        return D
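
    # Worked example (added for illustration; not in the original file): a
    # two-round sample [(q0, a0), (q1, a1)] yields two training pairs:
    #   ("q0", "a0")
    #   ("[Round 0]\n问:q0\n答:a0[Round 1]\n问:q1\n答:", "a1")
    # i.e. each round becomes one sample whose prompt replays the full history.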

    def collate_fn(self, batch):
        if not hasattr(self, 'sptoken'):
            self.sptoken = self.tokenizer.encode(text="")[-2:]

        o = {}
        for i, b in enumerate(batch):
            if i == 0:
                for k in b:
                    o[k] = [torch.tensor(b[k])]
            else:
                for k in b:
                    o[k].append(torch.tensor(b[k]))
        for k in o:
            o[k] = torch.stack(o[k])

        max_len = torch.max(o.pop('seqlen')).tolist()
        b_input_ids = o['input_ids'][:, :max_len]
        ctxlens = o.pop('ctxlen')  # for compatibility with data cached by older versions
        if ctxlens is None:
            ctxlens = [None] * len(b_input_ids)

        b_position_ids, b_attention_mask = [], []
        for input_ids, context_length in zip(b_input_ids, ctxlens):
            context_length = context_length.squeeze(dim=-1)
            mask_position = context_length - 1
            position_ids = list(range(context_length)) + [mask_position] * (max_len - context_length)
            block_position_ids = [0] * context_length + list(range(1, max_len - context_length + 1))

            attention_mask = torch.ones((1, max_len, max_len))
            attention_mask = torch.tril(attention_mask)
            attention_mask[..., :context_length] = 1
            attention_mask = (attention_mask < 0.5)

            b_position_ids.append(torch.stack((torch.tensor(position_ids), torch.tensor(block_position_ids))))
            b_attention_mask.append(attention_mask)

        b_attention_mask = torch.stack(b_attention_mask, dim=0)
        b_position_ids = torch.stack(b_position_ids, dim=0)

        o['input_ids'] = b_input_ids.long()
        o['attention_mask'] = b_attention_mask.bool()
        o['position_ids'] = b_position_ids.long()
        o['labels'] = o['labels'][:, :max_len].long()
        return o
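
    # Worked example of the 2D position ids built above (added for
    # illustration; not in the original file). With max_len=6 and
    # context_length=3, mask_position = context_length - 1 = 2, so:
    #   position_ids       = [0, 1, 2, 2, 2, 2]
    #   block_position_ids = [0, 0, 0, 1, 2, 3]
    # The boolean attention mask is True where attention is blocked: the
    # context part is fully visible, the generated part is causal
    # (lower-triangular).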

if __name__ == '__main__':
    parser = HfArgumentParser((ModelArguments, TrainingArguments, DataArguments, LoraArguments))
    model_args, training_args, data_args, lora_args = parser.parse_dict(train_info_args)
    lora_args = lora_args.config

    dataHelper = NN_DataHelper(model_args, training_args, data_args)
    tokenizer, config, _, _ = dataHelper.load_tokenizer_and_config(tokenizer_class_name=ChatGLMTokenizer, config_class_name=ChatGLMConfig)
    assert tokenizer.eos_token_id == 130005

    # cache the dataset:
    # if output/dataset_0-train.record does not exist, build it
    if data_args.do_train:
        dataHelper.make_dataset_with_args(data_args.train_file, mixed_data=False, shuffle=True, mode='train')
    if data_args.do_eval:
        dataHelper.make_dataset_with_args(data_args.eval_file, shuffle=False, mode='eval')
    if data_args.do_test:
        dataHelper.make_dataset_with_args(data_args.test_file, shuffle=False, mode='test')

# def shuffle_records(record_filenames, outfile, compression_type='GZIP'):
#     print('shuffle_records record...')
#     options = RECORD.TFRecordOptions(compression_type=compression_type)
#     dataset_reader = Loader.RandomDataset(record_filenames, options=options, with_share_memory=True)
#     data_size = len(dataset_reader)
#     all_example = []
#     for i in tqdm(range(data_size), desc='load records'):
#         serialized = dataset_reader[i]
#         all_example.append(serialized)
#     dataset_reader.close()
#
#     shuffle_idx = list(range(data_size))
#     random.shuffle(shuffle_idx)
#     writer = WriterObject(outfile, options=options)
#     for i in tqdm(shuffle_idx, desc='shuffle record'):
#         example = all_example[i]
#         writer.write(example)
#     writer.close()
#
#
# # shuffle every record file once more
# for filename in dataHelper.train_files:
#     shuffle_records(filename, filename)
ssbuild commented 1 year ago

Re-download the tokenizer and weights from the official site.

leoluopy commented 1 year ago

It should be the LoRA FP16 training setup.

leoluopy commented 1 year ago

@ssbuild Thanks a lot!

ssbuild commented 1 year ago

@ssbuild Thanks a lot!

If it still fails, pull the latest code. I see your vocab is still 150k.
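
Editor's note: "150k" refers to the original ChatGLM-6B tokenizer with 150,528 tokens; later checkpoints ship a slimmed 130,528-token vocabulary that the rebased code targets, and mismatched ids then overrun the embedding table and fire the device-side assert. A quick check of which release is on disk, a sketch reusing the path from this thread:

```python
# Prints the local tokenizer's vocab size: 150528 indicates the original
# ChatGLM-6B release, 130528 the updated one the rebased code expects.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained('/home/leo/Downloads/chatglm-6b/', trust_remote_code=True)
print(tok.vocab_size)
```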