fudan-generative-vision / hallo

Hallo: Hierarchical Audio-Driven Visual Synthesis for Portrait Image Animation
https://fudan-generative-vision.github.io/hallo/
MIT License

ERROR - root - Failed to execute the training process: #199

Open torracxiaokeai opened 5 days ago

torracxiaokeai commented 5 days ago

Thank you very much for your excellent work. I am now encountering a problem while training the model in a virtual environment: when I execute the command line below, an error occurs. Can anyone help? It looks as though some modules have not been loaded, even though I have downloaded all of the pretrained models. How can I solve this? Thanks.

10/10/2024 19:12:15 - INFO - hallo.models.unet_3d - loaded temporal unet's pretrained weights from pretrained_models/stable-diffusion-v1-5/unet ...
10/10/2024 19:12:22 - INFO - hallo.models.unet_3d - Loaded 0.0M-parameter motion module

(hallo) root@d1ef2432db94:~/avatar_project/hallo# accelerate launch -m --config_file accelerate_config.yaml --machine_rank 0 --main_process_ip 0.0.0.0 --main_process_port 20055 --num_machines 1 --num_processes 1 scripts.train_stage1 --config ./configs/train/stage1.yaml

WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 2.2.2+cu118 with CUDA 1108 (you have 2.2.2+cu121)
    Python 3.10.14 (you have 3.10.14)
    Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
    Memory-efficient attention, SwiGLU, sparse and more won't be available.
    Set XFORMERS_MORE_DETAILS=1 for more details
/root/anaconda3/envs/hallo/lib/python3.10/site-packages/albumentations/init.py:13: UserWarning: A new version of Albumentations is available: 1.4.18 (you have 1.4.14). Upgrade using: pip install -U albumentations. To disable automatic update checks, set the environment variable NO_ALBUMENTATIONS_UPDATE to 1.
    check_for_updates()
[2024-10-10 19:12:12,545] [INFO] [real_accelerator.py:219:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-10-10 19:12:13,760] [INFO] [comm.py:652:init_distributed] cdb=None
[2024-10-10 19:12:13,760] [INFO] [comm.py:683:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
10/10/2024 19:12:13 - INFO - main - Distributed environment: DEEPSPEED  Backend: nccl
Num processes: 1
Process index: 0
Local process index: 0
Device: cuda:0

Mixed precision type: no
ds_config: {'train_batch_size': 'auto', 'train_micro_batch_size_per_gpu': 'auto', 'gradient_accumulation_steps': 1, 'zero_optimization': {'stage': 2, 'offload_optimizer': {'device': 'none', 'nvme_path': None}, 'offload_param': {'device': 'none', 'nvme_path': None}, 'stage3_gather_16bit_weights_on_model_save': False}, 'steps_per_print': inf, 'fp16': {'enabled': False}, 'bf16': {'enabled': False}}

{'scaling_factor', 'latents_mean', 'latents_std', 'force_upcast'} was not found in config. Values will be initialized to default values.
The config attributes {'center_input_sample': False, 'out_channels': 4} were passed to UNet2DConditionModel, but are not expected and will be ignored. Please verify your config.json configuration file.
{'addition_time_embed_dim', '_landmark_net', 'time_embedding_act_fn', 'use_linear_projection', 'addition_embed_type', '_center_input_sample', 'reverse_transformer_layers_per_block', '_out_channels', 'time_embedding_type', 'transformer_layers_per_block', 'projection_class_embeddings_input_dim', 'num_attention_heads', 'class_embed_type', 'timestep_post_act', 'conv_in_kernel', 'addition_embed_type_num_heads', 'mid_block_type', 'encoder_hid_dim', 'time_embedding_dim', 'mid_block_only_cross_attention', 'dropout', 'class_embeddings_concat', 'upcast_attention', 'num_class_embeds', 'encoder_hid_dim_type', 'time_cond_proj_dim', 'dual_cross_attention', 'only_cross_attention', 'attention_type', 'resnet_time_scale_shift'} was not found in config. Values will be initialized to default values.
Some weights of the model checkpoint were not used when initializing UNet2DConditionModel: ['conv_norm_out.bias, conv_norm_out.weight, conv_out.bias, conv_out.weight']
10/10/2024 19:12:15 - INFO - hallo.models.unet_3d - loaded temporal unet's pretrained weights from pretrained_models/stable-diffusion-v1-5/unet ...
The config attributes {'center_input_sample': False} were passed to UNet3DConditionModel, but are not expected and will be ignored. Please verify your config.json configuration file.
{'upcast_attention', 'num_class_embeds', 'unet_use_cross_frame_attention', 'motion_module_resolutions', 'stack_enable_blocks_depth', 'resnet_time_scale_shift', 'motion_module_mid_block', 'dual_cross_attention', 'only_cross_attention', 'class_embed_type', 'use_audio_module', 'audio_attention_dim', 'motion_module_decoder_only', 'stack_enable_blocks_name', 'motion_module_kwargs', 'use_linear_projection', 'use_inflated_groupnorm', 'motion_module_type'} was not found in config. Values will be initialized to default values.
10/10/2024 19:12:22 - INFO - hallo.models.unet_3d - Loaded 0.0M-parameter motion module
10/10/2024 19:12:23 - ERROR - root - Failed to execute the training process: No operator found for memory_efficient_attention_forward with inputs:
    query : shape=(1, 2, 1, 40) (torch.float32)
    key : shape=(1, 2, 1, 40) (torch.float32)
    value : shape=(1, 2, 1, 40) (torch.float32)
    attn_bias : <class 'NoneType'>
    p : 0.0
decoderF is not supported because:
    xFormers wasn't build with CUDA support
    attn_bias type is <class 'NoneType'>
    operator wasn't built - see python -m xformers.info for more info
flshattF@0.0.0 is not supported because:
    xFormers wasn't build with CUDA support
    dtype=torch.float32 (supported: {torch.float16, torch.bfloat16})
cutlassF is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see python -m xformers.info for more info
smallkF is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    xFormers wasn't build with CUDA support
    operator wasn't built - see python -m xformers.info for more info
    unsupported embed per head: 40
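The root cause of this failure is the xFormers warning at the top of the log: the installed wheel was built for torch 2.2.2+cu118 while the environment runs 2.2.2+cu121, so the CUDA extensions never load and no memory_efficient_attention operator is available for fp32 inputs. A minimal sketch of one way to realign the packages, keeping torch 2.2.2+cu121; the exact version pin is an assumption and should be checked against the installed torch:

pip uninstall -y xformers
# pull an xformers build that matches the cu121 torch already in the environment
# (0.0.25.post1 is believed to target torch 2.2.2; verify before pinning)
pip install xformers==0.0.25.post1 --index-url https://download.pytorch.org/whl/cu121
# confirm the CUDA operators are found before re-launching training
python -m xformers.info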

torracxiaokeai commented 5 days ago

After adjusting the xFormers version and the num_workers in train_stage1.py, the error changed to the one below. It seems to be an ONNX Runtime error: ERROR - root - Failed to execute the training process: [ONNXRuntimeError]

(hallo) root@d1ef2432db94:~/avatar_project/hallo# accelerate launch -m --config_file accelerate_config.yaml --machine_rank 0 --main_process_ip 0.0.0.0 --main_process_port 20055 --num_machines 1 --num_processes 1 scripts.train_stage1 --config ./configs/train/stage1.yaml

/root/anaconda3/envs/hallo/lib/python3.10/site-packages/albumentations/init.py:13: UserWarning: A new version of Albumentations is available: 1.4.18 (you have 1.4.14). Upgrade using: pip install -U albumentations. To disable automatic update checks, set the environment variable NO_ALBUMENTATIONS_UPDATE to 1.
    check_for_updates()
[2024-10-11 09:41:31,214] [INFO] [real_accelerator.py:219:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-10-11 09:41:32,582] [INFO] [comm.py:652:init_distributed] cdb=None
[2024-10-11 09:41:32,582] [INFO] [comm.py:683:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
10/11/2024 09:41:32 - INFO - main - Distributed environment: DEEPSPEED  Backend: nccl
Num processes: 1
Process index: 0
Local process index: 0
Device: cuda:0

Mixed precision type: no
ds_config: {'train_batch_size': 'auto', 'train_micro_batch_size_per_gpu': 'auto', 'gradient_accumulation_steps': 1, 'zero_optimization': {'stage': 2, 'offload_optimizer': {'device': 'none', 'nvme_path': None}, 'offload_param': {'device': 'none', 'nvme_path': None}, 'stage3_gather_16bit_weights_on_model_save': False}, 'steps_per_print': inf, 'fp16': {'enabled': False}, 'bf16': {'enabled': False}}

{'force_upcast', 'scaling_factor', 'latents_mean', 'latents_std'} was not found in config. Values will be initialized to default values. The config attributes {'center_input_sample': False, 'out_channels': 4} were passed to UNet2DConditionModel, but are not expected and will be ignored. Please verify your config.json configuration file. {'timestep_post_act', 'encoder_hid_dim', 'addition_embed_type', 'resnet_time_scale_shift', 'upcast_attention', '_landmark_net', 'mid_block_type', 'projection_class_embeddings_input_dim', 'dropout', 'addition_embed_type_num_heads', 'class_embeddings_concat', '_out_channels', 'addition_time_embed_dim', 'transformer_layers_per_block', 'dual_cross_attention', 'time_embedding_dim', 'time_cond_proj_dim', 'reverse_transformer_layers_per_block', 'attention_type', 'only_cross_attention', 'time_embedding_type', 'class_embed_type', 'mid_block_only_cross_attention', 'conv_in_kernel', 'num_attention_heads', 'num_class_embeds', 'use_linear_projection', '_center_input_sample', 'time_embedding_act_fn', 'encoder_hid_dim_type'} was not found in config. Values will be initialized to default values. Some weights of the model checkpoint were not used when initializing UNet2DConditionModel: ['conv_norm_out.bias, conv_norm_out.weight, conv_out.bias, conv_out.weight'] 10/11/2024 09:41:34 - INFO - hallo.models.unet_3d - loaded temporal unet's pretrained weights from pretrained_models/stable-diffusion-v1-5/unet ... The config attributes {'center_input_sample': False} were passed to UNet3DConditionModel, but are not expected and will be ignored. Please verify your config.json configuration file. {'unet_use_cross_frame_attention', 'use_audio_module', 'use_inflated_groupnorm', 'motion_module_kwargs', 'only_cross_attention', 'stack_enable_blocks_name', 'audio_attention_dim', 'motion_module_mid_block', 'stack_enable_blocks_depth', 'resnet_time_scale_shift', 'motion_module_decoder_only', 'motion_module_resolutions', 'upcast_attention', 'motion_module_type', 'num_class_embeds', 'use_linear_projection', 'class_embed_type', 'dual_cross_attention'} was not found in config. Values will be initialized to default values. 
10/11/2024 09:41:41 - INFO - hallo.models.unet_3d - Loaded 0.0M-parameter motion module [2024-10-11 09:41:42,302] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed info: version=0.15.2, git-hash=unknown, git-branch=unknown [2024-10-11 09:41:42,303] [INFO] [config.py:733:init] Config mesh_device None world_size = 1 [2024-10-11 09:41:42,475] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False [2024-10-11 09:41:42,480] [INFO] [logging.py:96:log_dist] [Rank 0] Using client Optimizer as basic optimizer [2024-10-11 09:41:42,480] [INFO] [logging.py:96:log_dist] [Rank 0] Removing param_group that has no 'params' in the basic Optimizer [2024-10-11 09:41:42,759] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Basic Optimizer = AdamW [2024-10-11 09:41:42,759] [INFO] [utils.py:59:is_zero_supported_optimizer] Checking ZeRO support for optimizer=AdamW type=<class 'torch.optim.adamw.AdamW'> [2024-10-11 09:41:42,759] [INFO] [logging.py:96:log_dist] [Rank 0] Creating torch.float32 ZeRO stage 2 optimizer [2024-10-11 09:41:42,759] [INFO] [stage_1_and_2.py:149:init] Reduce bucket size 500000000 [2024-10-11 09:41:42,759] [INFO] [stage_1_and_2.py:150:init] Allgather bucket size 500000000 [2024-10-11 09:41:42,759] [INFO] [stage_1_and_2.py:151:init] CPU Offload: False [2024-10-11 09:41:42,759] [INFO] [stage_1_and_2.py:152:init] Round robin gradient partitioning: False [2024-10-11 09:41:46,538] [INFO] [utils.py:781:see_memory_usage] Before initializing optimizer states [2024-10-11 09:41:46,539] [INFO] [utils.py:782:see_memory_usage] MA 9.78 GB Max_MA 12.98 GB CA 13.04 GB Max_CA 13 GB [2024-10-11 09:41:46,539] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 25.53 GB, percent = 10.1% [2024-10-11 09:41:46,741] [INFO] [utils.py:781:see_memory_usage] After initializing optimizer states [2024-10-11 09:41:46,742] [INFO] [utils.py:782:see_memory_usage] MA 9.78 GB Max_MA 16.19 GB CA 19.45 GB Max_CA 19 GB [2024-10-11 09:41:46,743] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 25.52 GB, percent = 10.1% [2024-10-11 09:41:46,743] [INFO] [stage_1_and_2.py:544:init] optimizer state initialized [2024-10-11 09:41:46,937] [INFO] [utils.py:781:see_memory_usage] After initializing ZeRO optimizer [2024-10-11 09:41:46,938] [INFO] [utils.py:782:see_memory_usage] MA 9.78 GB Max_MA 9.78 GB CA 19.45 GB Max_CA 19 GB [2024-10-11 09:41:46,938] [INFO] [utils.py:789:see_memory_usage] CPU Virtual Memory: used = 25.52 GB, percent = 10.1% [2024-10-11 09:41:46,978] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Final Optimizer = DeepSpeedZeroOptimizer [2024-10-11 09:41:46,979] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed using configured LR scheduler = None [2024-10-11 09:41:46,979] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed LR Scheduler = None [2024-10-11 09:41:46,979] [INFO] [logging.py:96:log_dist] [Rank 0] step=0, skipped=0, lr=[1e-05], mom=[(0.9, 0.999)] [2024-10-11 09:41:46,985] [INFO] [config.py:999:print] DeepSpeedEngine configuration: [2024-10-11 09:41:46,985] [INFO] [config.py:1003:print] activation_checkpointing_config { "partition_activations": false, "contiguous_memory_optimization": false, "cpu_checkpointing": false, "number_checkpoints": null, "synchronize_checkpoint_boundary": false, "profile": false } [2024-10-11 09:41:46,986] [INFO] [config.py:1003:print] aio_config ................... 
{'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True, 'use_gds': False} [2024-10-11 09:41:46,986] [INFO] [config.py:1003:print] amp_enabled .................. False [2024-10-11 09:41:46,986] [INFO] [config.py:1003:print] amp_params ................... False [2024-10-11 09:41:46,986] [INFO] [config.py:1003:print] autotuning_config ............ { "enabled": false, "start_step": null, "end_step": null, "metric_path": null, "arg_mappings": null, "metric": "throughput", "model_info": null, "results_dir": "autotuning_results", "exps_dir": "autotuning_exps", "overwrite": true, "fast": true, "start_profile_step": 3, "end_profile_step": 5, "tuner_type": "gridsearch", "tuner_early_stopping": 5, "tuner_num_trials": 50, "model_info_path": null, "mp_size": 1, "max_train_batch_size": null, "min_train_batch_size": 1, "max_train_micro_batch_size_per_gpu": 1.024000e+03, "min_train_micro_batch_size_per_gpu": 1, "num_tuning_micro_batch_sizes": 3 } [2024-10-11 09:41:46,986] [INFO] [config.py:1003:print] bfloat16_enabled ............. False [2024-10-11 09:41:46,986] [INFO] [config.py:1003:print] bfloat16_immediate_grad_update False [2024-10-11 09:41:46,986] [INFO] [config.py:1003:print] checkpoint_parallel_write_pipeline False [2024-10-11 09:41:46,986] [INFO] [config.py:1003:print] checkpoint_tag_validation_enabled True [2024-10-11 09:41:46,986] [INFO] [config.py:1003:print] checkpoint_tag_validation_fail False [2024-10-11 09:41:46,986] [INFO] [config.py:1003:print] comms_config ................. <deepspeed.comm.config.DeepSpeedCommsConfig object at 0x7f5e581f5f60> [2024-10-11 09:41:46,986] [INFO] [config.py:1003:print] communication_data_type ...... None [2024-10-11 09:41:46,986] [INFO] [config.py:1003:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}} [2024-10-11 09:41:46,986] [INFO] [config.py:1003:print] curriculum_enabled_legacy .... False [2024-10-11 09:41:46,986] [INFO] [config.py:1003:print] curriculum_params_legacy ..... False [2024-10-11 09:41:46,986] [INFO] [config.py:1003:print] data_efficiency_config ....... {'enabled': False, 'seed': 1234, 'data_sampling': {'enabled': False, 'num_epochs': 1000, 'num_workers': 0, 'curriculum_learning': {'enabled': False}}, 'data_routing': {'enabled': False, 'random_ltd': {'enabled': False, 'layer_token_lr_schedule': {'enabled': False}}}} [2024-10-11 09:41:46,986] [INFO] [config.py:1003:print] data_efficiency_enabled ...... 
False [2024-10-11 09:41:46,986] [INFO] [config.py:1003:print] dataloader_drop_last ......... False [2024-10-11 09:41:46,986] [INFO] [config.py:1003:print] disable_allgather ............ False [2024-10-11 09:41:46,986] [INFO] [config.py:1003:print] dump_state ................... False [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] dynamic_loss_scale_args ...... None [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] eigenvalue_enabled ........... False [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] eigenvalue_gas_boundary_resolution 1 [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] eigenvalue_layer_name ........ bert.encoder.layer [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] eigenvalue_layer_num ......... 0 [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] eigenvalue_max_iter .......... 100 [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] eigenvalue_stability ......... 1e-06 [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] eigenvalue_tol ............... 0.01 [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] eigenvalue_verbose ........... False [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] elasticity_enabled ........... False [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] flops_profiler_config ........ { "enabled": false, "recompute_fwd_factor": 0.0, "profile_step": 1, "module_depth": -1, "top_modules": 1, "detailed": true, "output_file": null } [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] fp16_auto_cast ............... None [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] fp16_enabled ................. False [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] fp16_master_weights_and_gradients False [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] global_rank .................. 0 [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] grad_accum_dtype ............. None [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] gradient_accumulation_steps .. 1 [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] gradient_clipping ............ 0.0 [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] gradient_predivide_factor .... 1.0 [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] graph_harvesting ............. False [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] hybrid_engine ................ enabled=False max_out_tokens=512 inference_tp_size=1 release_inference_cache=False pin_parameters=True tp_gather_partition_size=8 [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] initial_dynamic_scale ........ 65536 [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] load_universal_checkpoint .... False [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] loss_scale ................... 0 [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] memory_breakdown ............. False [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] mics_hierarchial_params_gather False [2024-10-11 09:41:46,987] [INFO] [config.py:1003:print] mics_shard_size .............. -1 [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] monitor_config ............... 
tensorboard=TensorBoardConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') comet=CometConfig(enabled=False, samples_log_interval=100, project=None, workspace=None, api_key=None, experiment_name=None, experiment_key=None, online=None, mode=None) wandb=WandbConfig(enabled=False, group=None, team=None, project='deepspeed') csv_monitor=CSVConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] nebula_config ................ { "enabled": false, "persistent_storage_path": null, "persistent_time_interval": 100, "num_of_version_in_retention": 2, "enable_nebula_load": true, "load_path": null } [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] optimizer_legacy_fusion ...... False [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] optimizer_name ............... None [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] optimizer_params ............. None [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0, 'pipe_partitioned': True, 'grad_partitioned': True} [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] pld_enabled .................. False [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] pld_params ................... False [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] prescale_gradients ........... False [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] scheduler_name ............... None [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] scheduler_params ............. None [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] seq_parallel_communication_data_type torch.float32 [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] sparse_attention ............. None [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] sparse_gradients_enabled ..... False [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] steps_per_print .............. inf [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] timers_config ................ enabled=True synchronized=True [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] train_batch_size ............. 1 [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] train_micro_batch_size_per_gpu 1 [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] use_data_before_expertparallel False [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] use_node_local_storage ....... False [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] wall_clock_breakdown ......... False [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] weight_quantization_config ... None [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] world_size ................... 1 [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] zero_allow_untested_optimizer True [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] zero_config .................. 
stage=2 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=500000000 use_multi_rank_bucket_allreduce=True allgather_partitions=True allgather_bucket_size=500000000 overlap_comm=False load_from_fp32_weights=True elastic_checkpoint=False offload_param=DeepSpeedZeroOffloadParamConfig(device='none', nvme_path=None, buffer_count=5, buffer_size=100000000, max_in_cpu=1000000000, pin_memory=False) offload_optimizer=DeepSpeedZeroOffloadOptimizerConfig(device='none', nvme_path=None, buffer_count=4, pin_memory=False, pipeline_read=False, pipeline_write=False, fast_init=False, ratio=1.0) sub_group_size=1000000000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=50000000 param_persistence_threshold=100000 model_persistence_threshold=9223372036854775807 max_live_parameters=1000000000 max_reuse_distance=1000000000 gather_16bit_weights_on_model_save=False use_all_reduce_for_fetch_params=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False zero_hpz_partition_size=1 zero_quantized_weights=False zero_quantized_nontrainable_weights=False zero_quantized_gradients=False mics_shard_size=-1 mics_hierarchical_params_gather=False memory_efficient_linear=True pipeline_loading_checkpoint=False override_module_apply=True [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] zero_enabled ................. True [2024-10-11 09:41:46,988] [INFO] [config.py:1003:print] zero_force_ds_cpu_optimizer .. True [2024-10-11 09:41:46,989] [INFO] [config.py:1003:print] zero_optimization_stage ...... 2 [2024-10-11 09:41:46,989] [INFO] [config.py:989:print_user_config] json = { "train_batch_size": 1, "train_micro_batch_size_per_gpu": 1, "gradient_accumulation_steps": 1, "zero_optimization": { "stage": 2, "offload_optimizer": { "device": "none", "nvme_path": null }, "offload_param": { "device": "none", "nvme_path": null }, "stage3_gather_16bit_weights_on_model_save": false }, "steps_per_print": inf, "fp16": { "enabled": false }, "bf16": { "enabled": false }, "zero_allow_untested_optimizer": true } 10/11/2024 09:41:47 - INFO - main - save config to ./exp_output/stage1 10/11/2024 09:41:47 - INFO - main - Running training 10/11/2024 09:41:47 - INFO - main - Num examples = 319 10/11/2024 09:41:47 - INFO - main - Num Epochs = 95 10/11/2024 09:41:47 - INFO - main - Instantaneous batch size per device = 1 10/11/2024 09:41:47 - INFO - main - Total train batch size (w. parallel, distributed & accumulation) = 1 10/11/2024 09:41:47 - INFO - main - Gradient Accumulation steps = 1 10/11/2024 09:41:47 - INFO - main - Total optimization steps = 30000 10/11/2024 09:41:47 - INFO - main - Loading checkpoint from ./exp_output/stage1/checkpoints Could not find checkpoint under ./exp_output/stage1/checkpoints, start training from scratch Steps: 0%| | 0/30000 [00:00<?, ?it/s][2024-10-11 09:41:49,664] [INFO] [loss_scaler.py:183:update_scale] [deepspeed] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 4294967296, reducing to 2147483648 Steps: 0%| | 1/30000 [00:02<21:31:52, 2.58s/it]10/11/2024 09:41:49 - INFO - main - Running validation... 
Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CUDAExecutionProvider': {'prefer_nhwc': '0', 'enable_skip_layer_norm_strict_mode': '0', 'tunable_op_enable': '0', 'enable_cuda_graph': '0', 'tunable_op_max_tuning_duration_ms': '0', 'tunable_op_tuning_enable': '0', 'cudnn_conv_use_max_workspace': '1', 'use_tf32': '1', 'cudnn_conv1d_pad_to_nc1d': '0', 'do_copy_in_default_stream': '1', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'gpu_external_empty_cache': '0', 'gpu_external_free': '0', 'gpu_external_alloc': '0', 'gpu_mem_limit': '18446744073709551615', 'arena_extend_strategy': 'kNextPowerOfTwo', 'user_compute_stream': '0', 'has_user_compute_stream': '0', 'use_ep_level_unified_stream': '0', 'device_id': '0'}} find model: ./pretrained_models/face_analysis/models/1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0 Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CUDAExecutionProvider': {'prefer_nhwc': '0', 'enable_skip_layer_norm_strict_mode': '0', 'tunable_op_enable': '0', 'enable_cuda_graph': '0', 'tunable_op_max_tuning_duration_ms': '0', 'tunable_op_tuning_enable': '0', 'cudnn_conv_use_max_workspace': '1', 'use_tf32': '1', 'cudnn_conv1d_pad_to_nc1d': '0', 'do_copy_in_default_stream': '1', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'gpu_external_empty_cache': '0', 'gpu_external_free': '0', 'gpu_external_alloc': '0', 'gpu_mem_limit': '18446744073709551615', 'arena_extend_strategy': 'kNextPowerOfTwo', 'user_compute_stream': '0', 'has_user_compute_stream': '0', 'use_ep_level_unified_stream': '0', 'device_id': '0'}} find model: ./pretrained_models/face_analysis/models/2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0 Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CUDAExecutionProvider': {'prefer_nhwc': '0', 'enable_skip_layer_norm_strict_mode': '0', 'tunable_op_enable': '0', 'enable_cuda_graph': '0', 'tunable_op_max_tuning_duration_ms': '0', 'tunable_op_tuning_enable': '0', 'cudnn_conv_use_max_workspace': '1', 'use_tf32': '1', 'cudnn_conv1d_pad_to_nc1d': '0', 'do_copy_in_default_stream': '1', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'gpu_external_empty_cache': '0', 'gpu_external_free': '0', 'gpu_external_alloc': '0', 'gpu_mem_limit': '18446744073709551615', 'arena_extend_strategy': 'kNextPowerOfTwo', 'user_compute_stream': '0', 'has_user_compute_stream': '0', 'use_ep_level_unified_stream': '0', 'device_id': '0'}} find model: ./pretrained_models/face_analysis/models/genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0 Applied providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CUDAExecutionProvider': {'prefer_nhwc': '0', 'enable_skip_layer_norm_strict_mode': '0', 'tunable_op_enable': '0', 'enable_cuda_graph': '0', 'tunable_op_max_tuning_duration_ms': '0', 'tunable_op_tuning_enable': '0', 'cudnn_conv_use_max_workspace': '1', 'use_tf32': '1', 'cudnn_conv1d_pad_to_nc1d': '0', 'do_copy_in_default_stream': '1', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'gpu_external_empty_cache': '0', 'gpu_external_free': '0', 'gpu_external_alloc': '0', 'gpu_mem_limit': '18446744073709551615', 'arena_extend_strategy': 'kNextPowerOfTwo', 'user_compute_stream': '0', 'has_user_compute_stream': '0', 'use_ep_level_unified_stream': '0', 'device_id': '0'}} find model: ./pretrained_models/face_analysis/models/glintr100.onnx recognition ['None', 3, 112, 112] 127.5 127.5 Applied 
providers: ['CUDAExecutionProvider', 'CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}, 'CUDAExecutionProvider': {'prefer_nhwc': '0', 'enable_skip_layer_norm_strict_mode': '0', 'tunable_op_enable': '0', 'enable_cuda_graph': '0', 'tunable_op_max_tuning_duration_ms': '0', 'tunable_op_tuning_enable': '0', 'cudnn_conv_use_max_workspace': '1', 'use_tf32': '1', 'cudnn_conv1d_pad_to_nc1d': '0', 'do_copy_in_default_stream': '1', 'cudnn_conv_algo_search': 'EXHAUSTIVE', 'gpu_external_empty_cache': '0', 'gpu_external_free': '0', 'gpu_external_alloc': '0', 'gpu_mem_limit': '18446744073709551615', 'arena_extend_strategy': 'kNextPowerOfTwo', 'user_compute_stream': '0', 'has_user_compute_stream': '0', 'use_ep_level_unified_stream': '0', 'device_id': '0'}} find model: ./pretrained_models/face_analysis/models/scrfd_10g_bnkps.onnx detection [1, 3, '?', '?'] 127.5 128.0 set det-size: (640, 640) 10/11/2024 09:41:59 - ERROR - root - Failed to execute the training process: [ONNXRuntimeError] : 1 : FAIL : /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:123 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char, const char, ERRTYPE, const char, const char, int) [with ERRTYPE = cudnnStatus_t; bool THRW = true; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] /onnxruntime_src/onnxruntime/core/providers/cuda/cuda_call.cc:116 std::conditional_t<THRW, void, onnxruntime::common::Status> onnxruntime::CudaCall(ERRTYPE, const char, const char, ERRTYPE, const char, const char, int) [with ERRTYPE = cudnnStatus_t; bool THRW = true; std::conditional_t<THRW, void, onnxruntime::common::Status> = void] CUDNN failure 4: CUDNN_STATUS_INTERNAL_ERROR ; GPU=0 ; hostname=d1ef2432db94 ; file=/onnxruntime_src/onnxruntime/core/providers/cuda/cuda_stream_handle.cc ; line=76 ; expr=cudnnCreate(&cudnnhandle);

Steps: 0%| | 1/30000 [00:12<102:16:46, 12.27s/it] [rank0]:[W1011 09:42:00.132046536 ProcessGroupNCCL.cpp:1168] Warning: WARNING: process group has NOT been destroyed before we destruct ProcessGroupNCCL. On normal program exit, the application should call destroy_process_group to ensure that any pending NCCL operations have finished in this process. In rare cases this process can exit before this point and block the progress of another member of the process group. This constraint has always been present, but this warning has only been added since PyTorch 2.4 (function operator())
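The failure now happens when the face_analysis ONNX models are loaded on the CUDAExecutionProvider during validation: cudnnCreate returns CUDNN_STATUS_INTERNAL_ERROR, which usually indicates that the installed onnxruntime-gpu was built against a different CUDA/cuDNN major version than the one present in the container (or that the GPU has no memory left when the session is created). A sketch of how to compare the versions; the CUDA 12 package index on the last line is an assumption taken from the onnxruntime install notes and worth double-checking:

# what onnxruntime is installed and which providers it can actually use
python -c "import onnxruntime as ort; print(ort.__version__, ort.get_available_providers())"
# what CUDA / cuDNN the rest of the stack is using
python -c "import torch; print(torch.version.cuda, torch.backends.cudnn.version())"
ldconfig -p | grep libcudnn
# onnxruntime-gpu wheels built for CUDA 12.x are published on a separate index (assumption):
pip install "onnxruntime-gpu==1.18.0" --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/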

torracxiaokeai commented 5 days ago

After I matched the cuDNN version to ONNX Runtime 1.18.0 and CUDA 12.1, training runs, but it goes out of memory within the first few steps. The device I use is a 3090 with 24 GB of memory, and I have already set --num_processes to 1 and the batch size in stage1.yaml to 1. How can I keep training from running out of memory?
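The logs above show mixed precision "no" and optimizer offload "none", so stage 1 is training entirely in fp32 with all optimizer state resident on the GPU. A sketch of the first knobs to try; the config key names mentioned in the comments are assumptions to verify against accelerate_config.yaml and configs/train/stage1.yaml:

# 1) override mixed precision from the CLI (accelerate launch lets CLI flags override the config file)
accelerate launch --config_file accelerate_config.yaml --mixed_precision fp16 \
    --num_machines 1 --num_processes 1 \
    -m scripts.train_stage1 --config ./configs/train/stage1.yaml
# 2) in the DeepSpeed section of accelerate_config.yaml, move optimizer state off the GPU,
#    e.g. offload_optimizer_device: cpu  (the log above shows it is currently "none";
#    key name per the Accelerate DeepSpeed plugin, verify against your file)
# 3) if stage1.yaml exposes resolution or sampled-frame-count settings, lowering them also
#    reduces peak memory (key names not checked here)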

torracxiaokeai commented 4 days ago

After I set --num_processes to 4, since I have multiple GPUs, there is a new error: ncclSystemError: System call (e.g. socket, malloc) or external library call failed or device error. I have searched the Internet for this error for a long time but have not been able to resolve it.

10/11/2024 19:33:15 - ERROR - root - Failed to execute the training process: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1691, unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.19.3
ncclSystemError: System call (e.g. socket, malloc) or external library call failed or device error. Last error: Error while creating shared memory segment /dev/shm/nccl-6a00PE (size 9637888)
10/11/2024 19:33:15 - ERROR - root - Failed to execute the training process: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1691, unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.19.3
ncclSystemError: System call (e.g. socket, malloc) or external library call failed or device error. Last error: Error while creating shared memory segment /dev/shm/nccl-E2WOOK (size 9637888)
10/11/2024 19:33:15 - ERROR - root - Failed to execute the training process: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1691, unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.19.3
ncclSystemError: System call (e.g. socket, malloc) or external library call failed or device error. Last error: Error while creating shared memory segment /dev/shm/nccl-fjmvuH (size 9637888)
10/11/2024 19:33:15 - ERROR - root - Failed to execute the training process: NCCL error in: ../torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp:1691, unhandled system error (run with NCCL_DEBUG=INFO for details), NCCL version 2.19.3
ncclSystemError: System call (e.g. socket, malloc) or external library call failed or device error. Last error: Error while creating shared memory segment /dev/shm/nccl-dPm0fA (size 9637888)
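The hostname in the log (d1ef2432db94) looks like a Docker container ID, and "Error while creating shared memory segment /dev/shm/nccl-…" is the usual symptom of the container's default 64 MB /dev/shm being too small for multi-GPU NCCL. A sketch of the common remedies; the sizes and flags below are examples, not requirements:

# recreate the container with a larger shared-memory segment (or --ipc=host),
# keeping the rest of the original docker run arguments unchanged
docker run --gpus all --shm-size=16g --ipc=host ...
# or, inside the existing container, tell NCCL not to use /dev/shm at all (slower, but avoids the limit)
export NCCL_SHM_DISABLE=1
# rerun with extra NCCL logging to confirm which transport is used
NCCL_DEBUG=INFO accelerate launch --config_file accelerate_config.yaml --num_processes 4 \
    -m scripts.train_stage1 --config ./configs/train/stage1.yaml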