PaddlePaddle / Paddle

PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (the core framework of PaddlePaddle: high-performance single-machine and distributed training and cross-platform deployment for deep learning & machine learning)
http://www.paddlepaddle.org/
Apache License 2.0

Model training fails with `paddle.fluid.core_noavx has no attribute 'c_broadcast'` #50949

Open Macxy2018 opened 1 year ago

Macxy2018 commented 1 year ago

Please ask your question

Environment: an ARMv8 CPU machine (no GPU), running in a container built from an Ubuntu 18.04 base image, with Python 3.7.13. A virtual environment was created inside the container and the required pip dependencies were installed; the Paddle version is 2.3.

Symptom: compilation finishes without errors, installation succeeds, and single-process training works, but launching distributed training from PaddleNLP fails.

Python distributed training command: `python -m paddle.distributed.launch --nproc_per_node=2 --backend='gloo' xxxx.py`

Error message: `paddle.fluid.core_noavx has no attribute 'c_broadcast'`
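Before launching multiple processes, it can help to confirm that the installed build actually exposes the collective ops the launcher needs. A minimal sketch (`build_has_op` is a hypothetical helper; on the affected machine you would probe `paddle.fluid.core_noavx` for `c_broadcast`):

```python
import importlib

def build_has_op(module_name: str, op_name: str) -> bool:
    """Return True if module_name imports cleanly and exposes op_name."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(mod, op_name)

# On the failing build described above, this would come back False:
#   build_has_op("paddle.fluid.core_noavx", "c_broadcast")
# Stdlib stand-in so the sketch is runnable anywhere:
print(build_has_op("math", "sqrt"))          # attribute present
print(build_has_op("math", "c_broadcast"))   # attribute missing
```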

The Paddle build commands used were:

```shell
git clone https://github.com/PaddlePaddle/Paddle.git
cd Paddle
git checkout release/2.3
mkdir build && cd build
ulimit -n 4096
export PADDLE_VERSION=2.3.0
cmake .. -DPY_VERSION=3.7.13 -DPYTHON_EXECUTABLE=$(which python3) -DWITH_ARM=ON \
  -DWITH_TESTING=OFF -DCMAKE_BUILD_TYPE=Release -DON_INFER=ON -DWITH_XBYAK=OFF \
  -DPYTHON_INCLUDE_DIR=$(python3 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \
  -DPYTHON_LIBRARY=$(python3 -c "import distutils.sysconfig as sysconfig; print(sysconfig.get_config_var('LIBDIR'))") \
  -DWITH_GLOO=ON
make TARGET=ARMV8 -j$(nproc)
```
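A possible cause (an assumption, not confirmed in this thread): `-DWITH_GLOO=ON` only builds the Gloo transport, while collective operators such as `c_broadcast` are gated behind the separate `WITH_DISTRIBUTE` switch, which the cmake line above does not set. A rebuild sketch under that assumption:

```shell
# Sketch only: add WITH_DISTRIBUTE=ON next to WITH_GLOO=ON so the collective
# ops (c_broadcast etc.) are compiled into the Python core module.
CMAKE_FLAGS="-DPY_VERSION=3.7.13 -DWITH_ARM=ON -DCMAKE_BUILD_TYPE=Release"
CMAKE_FLAGS="$CMAKE_FLAGS -DWITH_GLOO=ON -DWITH_DISTRIBUTE=ON"
echo "cmake .. $CMAKE_FLAGS && make TARGET=ARMV8 -j\$(nproc)"
```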

paddle-bot[bot] commented 1 year ago

Hi! We've received your issue; please be patient while we arrange for engineers to respond as soon as possible. Please double-check that you have provided a clear problem description, reproduction code, environment & version information, and the error message. You may also check out the API docs, FAQ, historical GitHub issues, and the AI community to get an answer. Have a nice day!

zoooo0820 commented 1 year ago

Hi, is it this task? And does the error still occur with a newer Paddle version?

Macxy2018 commented 1 year ago

> Hi, is it this task? And does the error still occur with a newer Paddle version?

Yes, it is this task: https://github.com/PaddlePaddle/PaddleNLP/tree/develop/applications/information_extraction/text. The 2.4 build of Paddle is still compiling; for now I am using the CPU-only 2.3 build compiled for ARMv8. Training is launched with `python3 -m paddle.distributed.launch --nproc_per_node=8 --backend='gloo' finetune.py`.

Macxy2018 commented 1 year ago

> Hi, is it this task? And does the error still occur with a newer Paddle version?

When compiling the 2.3 release I added `-DWITH_DISTRIBUTE=ON`. The full cmake command was:

```shell
cmake .. -DPY_VERSION=3.7.13 -DPYTHON_EXECUTABLE=$(which python3) -DWITH_ARM=ON \
  -DWITH_TESTING=OFF -DCMAKE_BUILD_TYPE=Release -DON_INFER=ON -DWITH_XBYAK=OFF \
  -DPYTHON_INCLUDE_DIR=$(python3 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") \
  -DPYTHON_LIBRARY=$(python3 -c "import distutils.sysconfig as sysconfig; print(sysconfig.get_config_var('LIBDIR'))") \
  -DWITH_GLOO=ON
```

Training now starts but then aborts with the following error:

```
ERROR 2023-02-27 16:07:13,704 launch_utils.py:642] ABORT!!! Out of all 8 trainers, the trainer process with rank=[1, 6, 7] was aborted. Please check its log.
```

zoooo0820 commented 1 year ago

@Macxy2018 Could you check the log files for the workers with ids 1, 6, and 7 to see the specific cause of the failure?
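Each trainer's output goes to its own file rather than the console. In Paddle 2.x the launcher writes per-rank logs as `log/workerlog.<rank>` under the working directory by default (the exact path is an assumption here; it can be redirected with `--log_dir`), so the aborted ranks can be inspected like this:

```shell
# Print the inspection commands for the aborted ranks (1, 6, 7).
# The workerlog.<rank> naming is an assumption based on Paddle 2.x defaults.
for r in 1 6 7; do
  echo "tail -n 50 log/workerlog.$r"
done
```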

Macxy2018 commented 1 year ago

There is no explicit error message overall... workers 6 and 7 show nothing either; worker 1's log is below, after which the process simply stopped:

```
/venv/paddle/lib/python3.7/site-packages/_distutils_hack/__init__.py:33: UserWarning: Setuptools is replacing distutils.
  warnings.warn("Setuptools is replacing distutils.")
[2023-02-27 10:48:44,788] [ WARNING] - evaluation_strategy reset to IntervalStrategy.STEPS for do_eval is True. you can also set evaluation_strategy='epoch'.
[2023-02-27 10:48:44,789] [ INFO] - The default value for the training argument --report_to will change in v5 (from all installed integrations to none). In v5, you will need to use --report_to all to get the same behavior as now. You should start updating your code and make this info disappear :-).
[2023-02-27 10:48:44,789] [ INFO] - ============================================================
[2023-02-27 10:48:44,789] [ INFO] - Model Configuration Arguments
[2023-02-27 10:48:44,790] [ INFO] - paddle commit id :a5875319fe3bdd359895f1f6a11faf21df886f88
[2023-02-27 10:48:44,790] [ INFO] - export_model_dir :./checkpoint_base_1/model_best
[2023-02-27 10:48:44,790] [ INFO] - model_name_or_path :uie-base
[2023-02-27 10:48:44,790] [ INFO] - multilingual :False
[2023-02-27 10:48:44,790] [ INFO] -
[2023-02-27 10:48:44,790] [ INFO] - ============================================================
[2023-02-27 10:48:44,790] [ INFO] - Data Configuration Arguments
[2023-02-27 10:48:44,791] [ INFO] - paddle commit id :a5875319fe3bdd359895f1f6a11faf21df886f88
[2023-02-27 10:48:44,791] [ INFO] - dev_path :data/dev.txt
[2023-02-27 10:48:44,791] [ INFO] - max_seq_length :512
[2023-02-27 10:48:44,791] [ INFO] - train_path :data/train.txt
[2023-02-27 10:48:44,791] [ INFO] -
[2023-02-27 10:48:46,324] [ WARNING] - Process rank: 1, device: cpu, world_size: 8, distributed training: True, 16-bits training: False
[2023-02-27 10:48:46,324] [ INFO] - We are using <class 'paddlenlp.transformers.ernie.tokenizer.ErnieTokenizer'> to load 'uie-base'.
[2023-02-27 10:48:46,325] [ INFO] - Already cached /root/.paddlenlp/models/uie-base/ernie_3.0_base_zh_vocab.txt
[2023-02-27 10:48:46,360] [ INFO] - tokenizer config file saved in /root/.paddlenlp/models/uie-base/tokenizer_config.json
[2023-02-27 10:48:46,360] [ INFO] - Special tokens file saved in /root/.paddlenlp/models/uie-base/special_tokens_map.json
[2023-02-27 10:48:46,362] [ INFO] - Model config ErnieConfig {
  "attention_probs_dropout_prob": 0.1,
  "enable_recompute": false,
  "fuse": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 2048,
  "model_type": "ernie",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "paddlenlp_version": null,
  "pool_act": "tanh",
  "task_id": 0,
  "task_type_vocab_size": 3,
  "type_vocab_size": 4,
  "use_task_id": true,
  "vocab_size": 40000
}
[2023-02-27 10:48:57,758] [ INFO] - All model checkpoint weights were used when initializing UIE.
[2023-02-27 10:48:57,759] [ INFO] - All the weights of UIE were initialized from the model checkpoint at uie-base. If your task is similar to the task the model of the checkpoint was trained on, you can already use UIE for predictions without further training.
[2023-02-27 10:48:57,818] [ INFO] - ============================================================
[2023-02-27 10:48:57,819] [ INFO] - Training Configuration Arguments
[2023-02-27 10:48:57,819] [ INFO] - paddle commit id :a5875319fe3bdd359895f1f6a11faf21df886f88
[2023-02-27 10:48:57,819] [ INFO] - _no_sync_in_gradient_accumulation:True
[2023-02-27 10:48:57,819] [ INFO] - activation_quantize_type :None
[2023-02-27 10:48:57,820] [ INFO] - adam_beta1 :0.9
[2023-02-27 10:48:57,820] [ INFO] - adam_beta2 :0.999
[2023-02-27 10:48:57,820] [ INFO] - adam_epsilon :1e-08
[2023-02-27 10:48:57,820] [ INFO] - algo_list :None
[2023-02-27 10:48:57,820] [ INFO] - batch_num_list :None
[2023-02-27 10:48:57,820] [ INFO] - batch_size_list :None
[2023-02-27 10:48:57,820] [ INFO] - bf16 :False
[2023-02-27 10:48:57,821] [ INFO] - bf16_full_eval :False
[2023-02-27 10:48:57,821] [ INFO] - bias_correction :False
[2023-02-27 10:48:57,821] [ INFO] - current_device :cpu
[2023-02-27 10:48:57,821] [ INFO] - dataloader_drop_last :False
[2023-02-27 10:48:57,821] [ INFO] - dataloader_num_workers :0
[2023-02-27 10:48:57,821] [ INFO] - device :cpu
[2023-02-27 10:48:57,821] [ INFO] - disable_tqdm :True
[2023-02-27 10:48:57,822] [ INFO] - do_compress :False
[2023-02-27 10:48:57,822] [ INFO] - do_eval :True
[2023-02-27 10:48:57,822] [ INFO] - do_export :True
[2023-02-27 10:48:57,822] [ INFO] - do_predict :False
[2023-02-27 10:48:57,822] [ INFO] - do_train :True
[2023-02-27 10:48:57,822] [ INFO] - eval_batch_size :8
[2023-02-27 10:48:57,822] [ INFO] - eval_steps :100
[2023-02-27 10:48:57,822] [ INFO] - evaluation_strategy :IntervalStrategy.STEPS
[2023-02-27 10:48:57,823] [ INFO] - flatten_param_grads :False
[2023-02-27 10:48:57,823] [ INFO] - fp16 :False
[2023-02-27 10:48:57,823] [ INFO] - fp16_full_eval :False
[2023-02-27 10:48:57,823] [ INFO] - fp16_opt_level :O1
[2023-02-27 10:48:57,823] [ INFO] - gradient_accumulation_steps :1
[2023-02-27 10:48:57,823] [ INFO] - greater_is_better :True
[2023-02-27 10:48:57,823] [ INFO] - ignore_data_skip :False
[2023-02-27 10:48:57,824] [ INFO] - input_dtype :int64
[2023-02-27 10:48:57,824] [ INFO] - input_infer_model_path :None
[2023-02-27 10:48:57,824] [ INFO] - label_names :['start_positions', 'end_positions']
[2023-02-27 10:48:57,824] [ INFO] - learning_rate :1e-05
[2023-02-27 10:48:57,824] [ INFO] - load_best_model_at_end :True
[2023-02-27 10:48:57,824] [ INFO] - local_process_index :1
[2023-02-27 10:48:57,824] [ INFO] - local_rank :1
[2023-02-27 10:48:57,825] [ INFO] - log_level :-1
[2023-02-27 10:48:57,825] [ INFO] - log_level_replica :-1
[2023-02-27 10:48:57,825] [ INFO] - log_on_each_node :True
[2023-02-27 10:48:57,825] [ INFO] - logging_dir :./checkpoint_base_1/model_best/runs/Feb27_10-48-44_b11d0c49d963
[2023-02-27 10:48:57,825] [ INFO] - logging_first_step :False
[2023-02-27 10:48:57,825] [ INFO] - logging_steps :10
[2023-02-27 10:48:57,825] [ INFO] - logging_strategy :IntervalStrategy.STEPS
[2023-02-27 10:48:57,825] [ INFO] - lr_scheduler_type :SchedulerType.LINEAR
[2023-02-27 10:48:57,826] [ INFO] - max_grad_norm :1.0
[2023-02-27 10:48:57,826] [ INFO] - max_steps :-1
[2023-02-27 10:48:57,826] [ INFO] - metric_for_best_model :eval_f1
[2023-02-27 10:48:57,826] [ INFO] - minimum_eval_times :None
[2023-02-27 10:48:57,826] [ INFO] - moving_rate :0.9
[2023-02-27 10:48:57,826] [ INFO] - no_cuda :False
[2023-02-27 10:48:57,827] [ INFO] - num_train_epochs :100.0
[2023-02-27 10:48:57,827] [ INFO] - onnx_format :True
[2023-02-27 10:48:57,827] [ INFO] - optim :OptimizerNames.ADAMW
[2023-02-27 10:48:57,827] [ INFO] - output_dir :./checkpoint_base_1/model_best
[2023-02-27 10:48:57,827] [ INFO] - overwrite_output_dir :True
[2023-02-27 10:48:57,827] [ INFO] - past_index :-1
[2023-02-27 10:48:57,827] [ INFO] - per_device_eval_batch_size :8
[2023-02-27 10:48:57,828] [ INFO] - per_device_train_batch_size :8
[2023-02-27 10:48:57,828] [ INFO] - prediction_loss_only :False
[2023-02-27 10:48:57,828] [ INFO] - process_index :1
[2023-02-27 10:48:57,828] [ INFO] - prune_embeddings :False
[2023-02-27 10:48:57,828] [ INFO] - recompute :False
[2023-02-27 10:48:57,828] [ INFO] - remove_unused_columns :True
[2023-02-27 10:48:57,828] [ INFO] - report_to :['visualdl']
[2023-02-27 10:48:57,829] [ INFO] - resume_from_checkpoint :None
[2023-02-27 10:48:57,829] [ INFO] - round_type :round
[2023-02-27 10:48:57,829] [ INFO] - run_name :./checkpoint_base_1/model_best
[2023-02-27 10:48:57,829] [ INFO] - save_on_each_node :False
[2023-02-27 10:48:57,829] [ INFO] - save_steps :100
[2023-02-27 10:48:57,829] [ INFO] - save_strategy :IntervalStrategy.STEPS
[2023-02-27 10:48:57,829] [ INFO] - save_total_limit :None
[2023-02-27 10:48:57,830] [ INFO] - scale_loss :32768
[2023-02-27 10:48:57,830] [ INFO] - seed :1000
[2023-02-27 10:48:57,830] [ INFO] - sharding :[]
[2023-02-27 10:48:57,830] [ INFO] - sharding_degree :-1
[2023-02-27 10:48:57,830] [ INFO] - should_log :False
[2023-02-27 10:48:57,830] [ INFO] - should_save :False
[2023-02-27 10:48:57,830] [ INFO] - skip_memory_metrics :True
[2023-02-27 10:48:57,831] [ INFO] - strategy :dynabert+ptq
[2023-02-27 10:48:57,831] [ INFO] - train_batch_size :8
[2023-02-27 10:48:57,831] [ INFO] - use_pact :True
[2023-02-27 10:48:57,831] [ INFO] - warmup_ratio :0.1
[2023-02-27 10:48:57,831] [ INFO] - warmup_steps :0
[2023-02-27 10:48:57,831] [ INFO] - weight_decay :0.0
[2023-02-27 10:48:57,831] [ INFO] - weight_quantize_type :channel_wise_abs_max
[2023-02-27 10:48:57,832] [ INFO] - width_mult_list :None
[2023-02-27 10:48:57,832] [ INFO] - world_size :8
[2023-02-27 10:48:57,832] [ INFO] -
[2023-02-27 10:49:00,390] [ INFO] - Running training
[2023-02-27 10:49:00,390] [ INFO] - Num examples = 570
[2023-02-27 10:49:00,390] [ INFO] - Num Epochs = 100
[2023-02-27 10:49:00,390] [ INFO] - Instantaneous batch size per device = 8
[2023-02-27 10:49:00,390] [ INFO] - Total train batch size (w. parallel, distributed & accumulation) = 64
[2023-02-27 10:49:00,390] [ INFO] - Gradient Accumulation steps = 1
[2023-02-27 10:49:00,391] [ INFO] - Total optimization steps = 900.0
[2023-02-27 10:49:00,391] [ INFO] - Total num train samples = 57000.0
[2023-02-27 10:49:00,395] [ INFO] - Number of trainable parameters = 117946370
```

zoooo0820 commented 1 year ago

Nothing in this log points to the problem so far. Does the error still occur with a 2.4 build?

Macxy2018 commented 1 year ago

> Nothing in this log points to the problem so far. Does the error still occur with a 2.4 build?

After building the 2.4 release, simply importing paddle fails with the following error:

```
(paddle) root@72ce093614d5:~/Setups/PaddleNLP/applications/information_extraction/text# python3
Python 3.7.13 (default, Feb 24 2023, 16:21:25) [GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import paddle
Error: Can not import paddle core while this file exists: /venv/paddle/lib/python3.7/site-packages/paddle/fluid/libpaddle.so
Traceback (most recent call last):
  File "/venv/paddle/lib/python3.7/site-packages/paddle/fluid/core.py", line 274, in <module>
    from . import libpaddle
ImportError: /venv/paddle/lib/python3.7/site-packages/paddle/fluid/libpaddle.so: undefined symbol: shm_unlink

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/venv/paddle/lib/python3.7/site-packages/paddle/__init__.py", line 25, in <module>
    from .framework import monkey_patch_variable
  File "/venv/paddle/lib/python3.7/site-packages/paddle/framework/__init__.py", line 17, in <module>
    from . import random  # noqa: F401
  File "/venv/paddle/lib/python3.7/site-packages/paddle/framework/random.py", line 16, in <module>
    import paddle.fluid as fluid
  File "/venv/paddle/lib/python3.7/site-packages/paddle/fluid/__init__.py", line 36, in <module>
    from . import framework
  File "/venv/paddle/lib/python3.7/site-packages/paddle/fluid/framework.py", line 37, in <module>
    from . import core
  File "/venv/paddle/lib/python3.7/site-packages/paddle/fluid/core.py", line 333, in <module>
    if not avx_supported() and libpaddle.is_compiled_with_avx():
NameError: name 'libpaddle' is not defined
```
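The `undefined symbol: shm_unlink` failure usually means the shared object was linked without librt: on glibc older than 2.34 the POSIX shared-memory functions (`shm_open`/`shm_unlink`) live in librt rather than libc, so a `libpaddle.so` built against such a toolchain needs `-lrt` at link time (this diagnosis is an assumption, not confirmed in the thread). A quick way to check where the symbol resolves on the affected machine:

```python
import ctypes
import ctypes.util

def symbol_available(name: str) -> bool:
    """Return True if the C symbol `name` resolves in the current process
    or in librt (where glibc < 2.34 keeps the POSIX shm_* functions)."""
    for lib in (None, ctypes.util.find_library("rt")):
        try:
            if hasattr(ctypes.CDLL(lib), name):
                return True
        except OSError:
            continue
    return False

print(symbol_available("shm_unlink"))
```

If this prints True while `libpaddle.so` still fails to load, the symbol exists on the system but the library was not linked against it, which points back at the link flags used during the build.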