bojone / bert4keras

Keras implementation of transformers for humans
https://kexue.fm/archives/6915
Apache License 2.0

task_question_answer_generation_by_seq2seq.py: error after modifying for multi-GPU #505

Open · murray-z opened 1 year ago

murray-z commented 1 year ago

When asking a question, please provide as much of the following information as possible:

Basic information

Core code

# Paste your core code here.
# Keep only the key parts; don't mindlessly paste all of your code.

# The modified parts of the code follow

os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
os.environ['TF_KERAS'] = '1'  # tf.keras is required

class data_generator(DataGenerator):
    """数据生成器
    """
    def __iter__(self, random=False):
        """单条样本格式:[CLS]篇章[SEP]答案[SEP]问题[SEP]
        """
        for is_end, (p, q, a) in self.sample(random):
            p_token_ids, _ = tokenizer.encode(p, maxlen=max_p_len + 1)
            a_token_ids, _ = tokenizer.encode(a, maxlen=max_a_len)
            q_token_ids, _ = tokenizer.encode(q, maxlen=max_q_len)
            token_ids = p_token_ids + a_token_ids[1:] + q_token_ids[1:]
            segment_ids = [0] * len(p_token_ids)
            segment_ids += [1] * (len(token_ids) - len(p_token_ids))
            yield token_ids, segment_ids

# Single machine, multiple GPUs
strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1"])  # create the single-machine multi-GPU strategy

with strategy.scope():  # build the model under this strategy

    bert = build_transformer_model(
        config_path,
        checkpoint_path=None,
        application='unilm',
        keep_tokens=keep_tokens,  # keep only the tokens in keep_tokens, to trim the original vocabulary
        return_keras_model=False
    )

    model = bert.model
    output = CrossEntropy(2)(model.inputs + model.outputs)

    model = Model(model.inputs, output)
    model.compile(optimizer=Adam(1e-5))
    model.summary()

    bert.load_weights_from_checkpoint(checkpoint_path)

if __name__ == '__main__':

    evaluator = Evaluator()
    train_generator = data_generator(train_data, batch_size)

    dataset = train_generator.to_dataset(
        types=('float32', 'float32'),
        shapes=([None], [None]),  # together with padded_batch=True below, enables automatic padding
        names=('Input-Token', 'Input-Segment'),
        padded_batch=True
    )  # the data must be converted to tf.data.Dataset; the names match the model's input layer names

    model.fit(
        dataset,
        epochs=epochs,
        steps_per_epoch=100,
        callbacks=[evaluator]
    )

Output

# Paste your debug output here.

Traceback (most recent call last):
  File "/mnt/zhangfazhan/qa_extraction/qag_multi_gpu.py", line 290, in <module>
    callbacks=[evaluator]
  File "/home/jackson.zhang/anaconda3/envs/tf/lib/python3.7/site-packages/keras/engine/training.py", line 1154, in fit
    batch_size=batch_size)
  File "/home/jackson.zhang/anaconda3/envs/tf/lib/python3.7/site-packages/keras/engine/training.py", line 579, in _standardize_user_data
    exception_prefix='input')
  File "/home/jackson.zhang/anaconda3/envs/tf/lib/python3.7/site-packages/keras/engine/training_utils.py", line 99, in standardize_input_data
    data = [standardize_single_array(x) for x in data]
  File "/home/jackson.zhang/anaconda3/envs/tf/lib/python3.7/site-packages/keras/engine/training_utils.py", line 99, in <listcomp>
    data = [standardize_single_array(x) for x in data]
  File "/home/jackson.zhang/anaconda3/envs/tf/lib/python3.7/site-packages/keras/engine/training_utils.py", line 34, in standardize_single_array
    elif x.ndim == 1:
AttributeError: 'DatasetV1Adapter' object has no attribute 'ndim'

What I have tried

Whatever the problem is, please try to solve it yourself first; only ask here after "every possible effort" has failed. Paste your troubleshooting process here.

bojone commented 1 year ago

Is os.environ['TF_KERAS'] = '1' (tf.keras is required) placed at the very top of your code, before bert4keras is imported?
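
For reference, a minimal sketch of the ordering being asked about, assuming the rest of the script stays as posted. The traceback above runs through site-packages/keras/engine/..., i.e. standalone Keras rather than tf.keras, which is what happens when TF_KERAS is read too late: standalone Keras's fit() tries to standardize the tf.data.Dataset as an array and fails on .ndim.

import os

# These must be set before tensorflow / bert4keras are imported; otherwise
# bert4keras binds to standalone keras instead of tf.keras.
os.environ['TF_KERAS'] = '1'
os.environ['CUDA_VISIBLE_DEVICES'] = '0,1'

import tensorflow as tf
from bert4keras.models import build_transformer_model
from bert4keras.snippets import DataGenerator
# ... the rest of the script from the issue, unchanged ...

# Optional sanity check: bert4keras.backend re-exports the keras module it
# bound to, so its name reveals which backend is actually in use.
from bert4keras.backend import keras
print(keras.__name__)  # plain 'keras' means tf.keras was NOT picked up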