Yuliang-Liu / Monkey

【CVPR 2024 Highlight】Monkey (LMM): Image Resolution and Text Label Are Important Things for Large Multi-modal Models
MIT License
1.82k stars · 128 forks

Mini-Monkey--Out of Memory--24GB 3090 #128

Closed · TongkunGuan closed this issue 2 months ago

TongkunGuan commented 2 months ago

Thank you for your excellent work on MultimodalOCR!

When I run the following command: `GPUS=2 BATCH_SIZE=8 sh shell/minimonkey/minimonkey_finetune_full.sh`

I encounter the following issue:

08/21/2024 09:40:11 - INFO - main - Using flash_attention_2 for InternLM [INFO|modeling_utils.py:3473] 2024-08-21 09:40:11,512 >> loading weights file /home/zengzhilin/MM/weight/model.safetensors [INFO|modeling_utils.py:1426] 2024-08-21 09:40:11,537 >> Instantiating MiniMonkeyChatModel model under default dtype torch.bfloat16. [INFO|modeling_utils.py:3582] 2024-08-21 09:40:11,538 >> Detected DeepSpeed ZeRO-3: activating zero.init() for this model [INFO|configuration_utils.py:826] 2024-08-21 09:40:11,550 >> Generate config GenerationConfig {}

[INFO|configuration_utils.py:826] 2024-08-21 09:40:12,172 >> Generate config GenerationConfig { "bos_token_id": 1, "eos_token_id": 2, "pad_token_id": 2, "top_p": null }

[2024-08-21 09:40:12,397] [INFO] [partition_parameters.py:343:exit] finished initializing model - num_params = 517, num_elems = 2.21B [INFO|modeling_utils.py:4350] 2024-08-21 09:40:14,289 >> All model checkpoint weights were used when initializing MiniMonkeyChatModel.

[INFO|modeling_utils.py:4358] 2024-08-21 09:40:14,289 >> All the weights of MiniMonkeyChatModel were initialized from the model checkpoint at /home/zengzhilin/MM/weight/. If your task is similar to the task the model of the checkpoint was trained on, you can already use MiniMonkeyChatModel for predictions without further training. [INFO|configuration_utils.py:779] 2024-08-21 09:40:14,294 >> loading configuration file /home/zengzhilin/MM/weight/generation_config.json [INFO|configuration_utils.py:826] 2024-08-21 09:40:14,294 >> Generate config GenerationConfig {}

08/21/2024 09:40:14 - INFO - main - Finished 08/21/2024 09:40:14 - INFO - main - model.config.force_image_size: 448 08/21/2024 09:40:14 - INFO - main - data_args.force_image_size: 448 08/21/2024 09:40:14 - INFO - main - model.config.vision_config.image_size: 448 08/21/2024 09:40:14 - INFO - main - [Dataset] num_image_token: 256 08/21/2024 09:40:14 - INFO - main - [Dataset] dynamic_image_size: True 08/21/2024 09:40:14 - INFO - main - [Dataset] use_thumbnail: True 08/21/2024 09:40:14 - INFO - main - [Dataset] min_dynamic_patch: 1, max_dynamic_patch: 8 08/21/2024 09:40:14 - INFO - main - Formatting inputs...Skip in lazy mode 08/21/2024 09:40:17 - INFO - main - Add dataset: llava_instruct_150k_zh with length: 157712 08/21/2024 09:40:17 - INFO - main - [Dataset] num_image_token: 256 08/21/2024 09:40:17 - INFO - main - [Dataset] dynamic_image_size: True 08/21/2024 09:40:17 - INFO - main - [Dataset] use_thumbnail: True 08/21/2024 09:40:17 - INFO - main - [Dataset] min_dynamic_patch: 1, max_dynamic_patch: 8 08/21/2024 09:40:17 - INFO - main - Formatting inputs...Skip in lazy mode 08/21/2024 09:40:24 - INFO - main - Add dataset: dvqa_train_200k with length: 200000 08/21/2024 09:40:24 - INFO - main - [Dataset] num_image_token: 256 08/21/2024 09:40:24 - INFO - main - [Dataset] dynamic_image_size: True 08/21/2024 09:40:24 - INFO - main - [Dataset] use_thumbnail: True 08/21/2024 09:40:24 - INFO - main - [Dataset] min_dynamic_patch: 1, max_dynamic_patch: 8 08/21/2024 09:40:24 - INFO - main - Formatting inputs...Skip in lazy mode 08/21/2024 09:40:25 - INFO - main - Add dataset: chartqa_train_18k with length: 18317 08/21/2024 09:40:25 - INFO - main - [Dataset] num_image_token: 256 08/21/2024 09:40:25 - INFO - main - [Dataset] dynamic_image_size: True 08/21/2024 09:40:25 - INFO - main - [Dataset] use_thumbnail: True 08/21/2024 09:40:25 - INFO - main - [Dataset] min_dynamic_patch: 1, max_dynamic_patch: 8 08/21/2024 09:40:25 - INFO - main - Formatting inputs...Skip in lazy mode 08/21/2024 09:40:26 - INFO - main - Add dataset: ai2d_train_12k with length: 12413 08/21/2024 09:40:26 - INFO - main - [Dataset] num_image_token: 256 08/21/2024 09:40:26 - INFO - main - [Dataset] dynamic_image_size: True 08/21/2024 09:40:26 - INFO - main - [Dataset] use_thumbnail: True 08/21/2024 09:40:26 - INFO - main - [Dataset] min_dynamic_patch: 1, max_dynamic_patch: 8 08/21/2024 09:40:26 - INFO - main - Formatting inputs...Skip in lazy mode 08/21/2024 09:40:30 - INFO - main - Add dataset: docvqa_train_10k with length: 10211 08/21/2024 09:40:30 - INFO - main - [Dataset] num_image_token: 256 08/21/2024 09:40:30 - INFO - main - [Dataset] dynamic_image_size: True 08/21/2024 09:40:30 - INFO - main - [Dataset] use_thumbnail: True 08/21/2024 09:40:30 - INFO - main - [Dataset] min_dynamic_patch: 1, max_dynamic_patch: 8 08/21/2024 09:40:30 - INFO - main - Formatting inputs...Skip in lazy mode 08/21/2024 09:40:31 - INFO - main - Add dataset: geoqa+ with length: 72318 08/21/2024 09:40:31 - INFO - main - [Dataset] num_image_token: 256 08/21/2024 09:40:31 - INFO - main - [Dataset] dynamic_image_size: True 08/21/2024 09:40:31 - INFO - main - [Dataset] use_thumbnail: True 08/21/2024 09:40:31 - INFO - main - [Dataset] min_dynamic_patch: 1, max_dynamic_patch: 8 08/21/2024 09:40:31 - INFO - main - Formatting inputs...Skip in lazy mode 08/21/2024 09:40:33 - INFO - main - Add dataset: synthdog_en with length: 29765 08/21/2024 09:40:33 - INFO - main - language_model.model.tok_embeddings.weight 08/21/2024 09:40:33 - INFO - main - 
language_model.model.layers.0.attention.wqkv.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.0.attention.wo.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.0.feed_forward.w1.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.0.feed_forward.w3.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.0.feed_forward.w2.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.0.attention_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.0.ffn_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.1.attention.wqkv.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.1.attention.wo.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.1.feed_forward.w1.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.1.feed_forward.w3.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.1.feed_forward.w2.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.1.attention_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.1.ffn_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.2.attention.wqkv.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.2.attention.wo.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.2.feed_forward.w1.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.2.feed_forward.w3.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.2.feed_forward.w2.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.2.attention_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.2.ffn_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.3.attention.wqkv.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.3.attention.wo.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.3.feed_forward.w1.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.3.feed_forward.w3.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.3.feed_forward.w2.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.3.attention_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.3.ffn_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.4.attention.wqkv.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.4.attention.wo.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.4.feed_forward.w1.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.4.feed_forward.w3.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.4.feed_forward.w2.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.4.attention_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.4.ffn_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.5.attention.wqkv.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.5.attention.wo.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.5.feed_forward.w1.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.5.feed_forward.w3.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.5.feed_forward.w2.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.5.attention_norm.weight 
08/21/2024 09:40:33 - INFO - main - language_model.model.layers.5.ffn_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.6.attention.wqkv.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.6.attention.wo.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.6.feed_forward.w1.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.6.feed_forward.w3.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.6.feed_forward.w2.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.6.attention_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.6.ffn_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.7.attention.wqkv.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.7.attention.wo.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.7.feed_forward.w1.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.7.feed_forward.w3.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.7.feed_forward.w2.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.7.attention_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.7.ffn_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.8.attention.wqkv.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.8.attention.wo.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.8.feed_forward.w1.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.8.feed_forward.w3.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.8.feed_forward.w2.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.8.attention_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.8.ffn_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.9.attention.wqkv.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.9.attention.wo.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.9.feed_forward.w1.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.9.feed_forward.w3.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.9.feed_forward.w2.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.9.attention_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.9.ffn_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.10.attention.wqkv.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.10.attention.wo.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.10.feed_forward.w1.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.10.feed_forward.w3.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.10.feed_forward.w2.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.10.attention_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.10.ffn_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.11.attention.wqkv.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.11.attention.wo.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.11.feed_forward.w1.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.11.feed_forward.w3.weight 08/21/2024 09:40:33 - INFO - main - 
language_model.model.layers.11.feed_forward.w2.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.11.attention_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.11.ffn_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.12.attention.wqkv.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.12.attention.wo.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.12.feed_forward.w1.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.12.feed_forward.w3.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.12.feed_forward.w2.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.12.attention_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.12.ffn_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.13.attention.wqkv.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.13.attention.wo.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.13.feed_forward.w1.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.13.feed_forward.w3.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.13.feed_forward.w2.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.13.attention_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.13.ffn_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.14.attention.wqkv.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.14.attention.wo.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.14.feed_forward.w1.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.14.feed_forward.w3.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.14.feed_forward.w2.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.14.attention_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.14.ffn_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.15.attention.wqkv.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.15.attention.wo.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.15.feed_forward.w1.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.15.feed_forward.w3.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.15.feed_forward.w2.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.15.attention_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.15.ffn_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.16.attention.wqkv.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.16.attention.wo.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.16.feed_forward.w1.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.16.feed_forward.w3.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.16.feed_forward.w2.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.16.attention_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.16.ffn_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.17.attention.wqkv.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.17.attention.wo.weight 08/21/2024 09:40:33 - INFO - main - 
language_model.model.layers.17.feed_forward.w1.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.17.feed_forward.w3.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.17.feed_forward.w2.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.17.attention_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.17.ffn_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.18.attention.wqkv.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.18.attention.wo.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.18.feed_forward.w1.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.18.feed_forward.w3.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.18.feed_forward.w2.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.18.attention_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.18.ffn_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.19.attention.wqkv.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.19.attention.wo.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.19.feed_forward.w1.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.19.feed_forward.w3.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.19.feed_forward.w2.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.19.attention_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.19.ffn_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.20.attention.wqkv.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.20.attention.wo.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.20.feed_forward.w1.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.20.feed_forward.w3.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.20.feed_forward.w2.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.20.attention_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.20.ffn_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.21.attention.wqkv.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.21.attention.wo.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.21.feed_forward.w1.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.21.feed_forward.w3.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.21.feed_forward.w2.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.21.attention_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.21.ffn_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.22.attention.wqkv.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.22.attention.wo.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.22.feed_forward.w1.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.22.feed_forward.w3.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.22.feed_forward.w2.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.22.attention_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.22.ffn_norm.weight 08/21/2024 09:40:33 - INFO - main - 
language_model.model.layers.23.attention.wqkv.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.23.attention.wo.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.23.feed_forward.w1.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.23.feed_forward.w3.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.23.feed_forward.w2.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.23.attention_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.layers.23.ffn_norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.model.norm.weight 08/21/2024 09:40:33 - INFO - main - language_model.output.weight [INFO|trainer.py:571] 2024-08-21 09:40:33,238 >> Using auto half precision backend [2024-08-21 09:40:33,450] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed info: version=0.13.5, git-hash=unknown, git-branch=unknown [2024-08-21 09:40:33,473] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False Using /home/zengzhilin/.cache/torch_extensions/py310_cu121 as PyTorch extensions root... Detected CUDA files, patching ldflags Emitting ninja build file /home/zengzhilin/.cache/torch_extensions/py310_cu121/fused_adam/build.ninja... Building extension module fused_adam... Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N) Using /home/zengzhilin/.cache/torch_extensions/py310_cu121 as PyTorch extensions root... [1/3] /usr/local/cuda-12.1/bin/nvcc --generate-dependencies-with-compile --dependency-output multi_tensor_adam.cuda.o.d -DTORCH_EXTENSION_NAME=fused_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/ops/csrc/includes -I/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/ops/csrc/adam -isystem /home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/torch/include -isystem /home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/torch/include/TH -isystem /home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/torch/include/THC -isystem /usr/local/cuda-12.1/include -isystem /home/zengzhilin/.conda/envs/MM/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -DCUDA_NO_HALF_OPERATORS -DCUDA_NO_HALF_CONVERSIONS -DCUDA_NO_BFLOAT16_CONVERSIONS -DCUDA_NO_HALF2_OPERATORS --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 --compiler-options '-fPIC' -O3 -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -lineinfo --use_fast_math -gencode=arch=compute_86,code=sm_86 -gencode=arch=compute_86,code=compute_86 -DBF16_AVAILABLE -UCUDA_NO_BFLOAT16_OPERATORS -UCUDA_NO_BFLOAT162_OPERATORS -std=c++17 -c /home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/ops/csrc/adam/multi_tensor_adam.cu -o multi_tensor_adam.cuda.o [2/3] c++ -MMD -MF fused_adam_frontend.o.d -DTORCH_EXTENSION_NAME=fused_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/ops/csrc/includes -I/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/ops/csrc/adam -isystem 
/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/torch/include -isystem /home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/torch/include/torch/csrc/api/include -isystem /home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/torch/include/TH -isystem /home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/torch/include/THC -isystem /usr/local/cuda-12.1/include -isystem /home/zengzhilin/.conda/envs/MM/include/python3.10 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++17 -O3 -std=c++17 -g -Wno-reorder -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -DBF16_AVAILABLE -c /home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/ops/csrc/adam/fused_adam_frontend.cpp -o fused_adam_frontend.o [3/3] c++ fused_adam_frontend.o multi_tensor_adam.cuda.o -shared -L/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/torch/lib -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda -ltorch -ltorch_python -L/usr/local/cuda-12.1/lib64 -lcudart -o fused_adam.so Loading extension module fused_adam... Time to load fused_adam op: 28.233609914779663 seconds Loading extension module fused_adam... Time to load fused_adam op: 28.157335996627808 seconds [2024-08-21 09:41:02,139] [INFO] [logging.py:96:log_dist] [Rank 0] Using DeepSpeed Optimizer param name adamw as basic optimizer [2024-08-21 09:41:02,139] [INFO] [logging.py:96:log_dist] [Rank 0] Removing param_group that has no 'params' in the basic Optimizer [2024-08-21 09:41:02,155] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Basic Optimizer = FusedAdam [2024-08-21 09:41:02,155] [INFO] [utils.py:56:is_zero_supported_optimizer] Checking ZeRO support for optimizer=FusedAdam type=<class 'deepspeed.ops.adam.fused_adam.FusedAdam'> [2024-08-21 09:41:02,155] [INFO] [logging.py:96:log_dist] [Rank 0] Creating fp16 ZeRO stage 3 optimizer, MiCS is enabled False, Hierarchical params gather False [2024-08-21 09:41:02,155] [INFO] [logging.py:96:log_dist] [Rank 0] Creating torch.bfloat16 ZeRO stage 3 optimizer [2024-08-21 09:41:02,348] [INFO] [utils.py:800:see_memory_usage] Stage 3 initialize beginning [2024-08-21 09:41:02,349] [INFO] [utils.py:801:see_memory_usage] MA 2.44 GB Max_MA 3.14 GB CA 2.46 GB Max_CA 3 GB [2024-08-21 09:41:02,349] [INFO] [utils.py:808:see_memory_usage] CPU Virtual Memory: used = 15.75 GB, percent = 12.5% [2024-08-21 09:41:02,352] [INFO] [stage3.py:130:init] Reduce bucket size 1000000000 [2024-08-21 09:41:02,352] [INFO] [stage3.py:131:init] Prefetch bucket size 1000000000 [2024-08-21 09:41:02,544] [INFO] [utils.py:800:see_memory_usage] DeepSpeedZeRoOffload initialize [begin] [2024-08-21 09:41:02,545] [INFO] [utils.py:801:see_memory_usage] MA 2.44 GB Max_MA 2.44 GB CA 2.46 GB Max_CA 2 GB [2024-08-21 09:41:02,545] [INFO] [utils.py:808:see_memory_usage] CPU Virtual Memory: used = 15.76 GB, percent = 12.5% Parameter Offload: Total persistent parameters: 618697728 in 443 params [2024-08-21 09:41:02,755] [INFO] [utils.py:800:see_memory_usage] DeepSpeedZeRoOffload initialize [end] [2024-08-21 09:41:02,756] [INFO] [utils.py:801:see_memory_usage] MA 2.44 GB Max_MA 2.44 GB CA 2.46 GB Max_CA 2 GB [2024-08-21 09:41:02,756] [INFO] [utils.py:808:see_memory_usage] CPU Virtual Memory: used = 15.76 GB, percent = 12.5% [2024-08-21 09:41:02,933] [INFO] [utils.py:800:see_memory_usage] Before creating fp16 partitions [2024-08-21 09:41:02,934] [INFO] [utils.py:801:see_memory_usage] MA 2.44 GB Max_MA 2.44 GB CA 2.46 GB Max_CA 2 GB [2024-08-21 09:41:02,934] [INFO] [utils.py:808:see_memory_usage] CPU Virtual Memory: used = 15.76 GB, 
percent = 12.5% [2024-08-21 09:41:05,169] [INFO] [utils.py:800:see_memory_usage] After creating fp16 partitions: 1 [2024-08-21 09:41:05,170] [INFO] [utils.py:801:see_memory_usage] MA 2.44 GB Max_MA 2.44 GB CA 3.07 GB Max_CA 3 GB [2024-08-21 09:41:05,170] [INFO] [utils.py:808:see_memory_usage] CPU Virtual Memory: used = 15.72 GB, percent = 12.5% [2024-08-21 09:41:05,328] [INFO] [utils.py:800:see_memory_usage] Before creating fp32 partitions [2024-08-21 09:41:05,329] [INFO] [utils.py:801:see_memory_usage] MA 2.44 GB Max_MA 2.44 GB CA 3.07 GB Max_CA 3 GB [2024-08-21 09:41:05,330] [INFO] [utils.py:808:see_memory_usage] CPU Virtual Memory: used = 15.72 GB, percent = 12.5% [2024-08-21 09:41:05,488] [INFO] [utils.py:800:see_memory_usage] After creating fp32 partitions [2024-08-21 09:41:05,489] [INFO] [utils.py:801:see_memory_usage] MA 5.96 GB Max_MA 7.72 GB CA 8.35 GB Max_CA 8 GB [2024-08-21 09:41:05,489] [INFO] [utils.py:808:see_memory_usage] CPU Virtual Memory: used = 15.72 GB, percent = 12.5% [2024-08-21 09:41:05,672] [INFO] [utils.py:800:see_memory_usage] Before initializing optimizer states [2024-08-21 09:41:05,673] [INFO] [utils.py:801:see_memory_usage] MA 5.96 GB Max_MA 5.96 GB CA 8.35 GB Max_CA 8 GB [2024-08-21 09:41:05,673] [INFO] [utils.py:808:see_memory_usage] CPU Virtual Memory: used = 15.73 GB, percent = 12.5% [2024-08-21 09:41:05,678] [INFO] [logging.py:96:log_dist] [Rank 0] time (ms) | init_optimizer_state: 0.00 [2024-08-21 09:41:05,870] [INFO] [utils.py:800:see_memory_usage] After initializing optimizer states [2024-08-21 09:41:05,871] [INFO] [utils.py:801:see_memory_usage] MA 5.96 GB Max_MA 9.48 GB CA 11.87 GB Max_CA 12 GB [2024-08-21 09:41:05,871] [INFO] [utils.py:808:see_memory_usage] CPU Virtual Memory: used = 15.73 GB, percent = 12.5% [2024-08-21 09:41:05,872] [INFO] [stage3.py:486:_setup_for_real_optimizer] optimizer state initialized [2024-08-21 09:41:06,109] [INFO] [utils.py:800:see_memory_usage] After initializing ZeRO optimizer [2024-08-21 09:41:06,110] [INFO] [utils.py:801:see_memory_usage] MA 9.58 GB Max_MA 10.28 GB CA 11.87 GB Max_CA 12 GB [2024-08-21 09:41:06,110] [INFO] [utils.py:808:see_memory_usage] CPU Virtual Memory: used = 15.74 GB, percent = 12.5% [2024-08-21 09:41:06,110] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Final Optimizer = adamw [2024-08-21 09:41:06,111] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed using client callable to create LR scheduler [2024-08-21 09:41:06,111] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed LR Scheduler = <torch.optim.lr_scheduler.LambdaLR object at 0x7f72ea1d2b60> [2024-08-21 09:41:06,111] [INFO] [logging.py:96:log_dist] [Rank 0] step=0, skipped=0, lr=[0.0], mom=[[0.9, 0.999]] [2024-08-21 09:41:06,112] [INFO] [config.py:996:print] DeepSpeedEngine configuration: [2024-08-21 09:41:06,112] [INFO] [config.py:1000:print] activation_checkpointing_config { "partition_activations": false, "contiguous_memory_optimization": false, "cpu_checkpointing": false, "number_checkpoints": null, "synchronize_checkpoint_boundary": false, "profile": false } [2024-08-21 09:41:06,113] [INFO] [config.py:1000:print] aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True} [2024-08-21 09:41:06,113] [INFO] [config.py:1000:print] amp_enabled .................. False [2024-08-21 09:41:06,113] [INFO] [config.py:1000:print] amp_params ................... False [2024-08-21 09:41:06,113] [INFO] [config.py:1000:print] autotuning_config ............ 
{ "enabled": false, "start_step": null, "end_step": null, "metric_path": null, "arg_mappings": null, "metric": "throughput", "model_info": null, "results_dir": "autotuning_results", "exps_dir": "autotuning_exps", "overwrite": true, "fast": true, "start_profile_step": 3, "end_profile_step": 5, "tuner_type": "gridsearch", "tuner_early_stopping": 5, "tuner_num_trials": 50, "model_info_path": null, "mp_size": 1, "max_train_batch_size": null, "min_train_batch_size": 1, "max_train_micro_batch_size_per_gpu": 1.024000e+03, "min_train_micro_batch_size_per_gpu": 1, "num_tuning_micro_batch_sizes": 3 } [2024-08-21 09:41:06,113] [INFO] [config.py:1000:print] bfloat16_enabled ............. True [2024-08-21 09:41:06,113] [INFO] [config.py:1000:print] bfloat16_immediate_grad_update False [2024-08-21 09:41:06,113] [INFO] [config.py:1000:print] checkpoint_parallel_write_pipeline False [2024-08-21 09:41:06,113] [INFO] [config.py:1000:print] checkpoint_tag_validation_enabled True [2024-08-21 09:41:06,113] [INFO] [config.py:1000:print] checkpoint_tag_validation_fail False [2024-08-21 09:41:06,113] [INFO] [config.py:1000:print] comms_config ................. <deepspeed.comm.config.DeepSpeedCommsConfig object at 0x7f72f83cabc0> [2024-08-21 09:41:06,113] [INFO] [config.py:1000:print] communication_data_type ...... None [2024-08-21 09:41:06,113] [INFO] [config.py:1000:print] compile_config ............... enabled=False backend='inductor' kwargs={} [2024-08-21 09:41:06,113] [INFO] [config.py:1000:print] compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}} [2024-08-21 09:41:06,113] [INFO] [config.py:1000:print] curriculum_enabled_legacy .... False [2024-08-21 09:41:06,113] [INFO] [config.py:1000:print] curriculum_params_legacy ..... False [2024-08-21 09:41:06,113] [INFO] [config.py:1000:print] data_efficiency_config ....... {'enabled': False, 'seed': 1234, 'data_sampling': {'enabled': False, 'num_epochs': 1000, 'num_workers': 0, 'curriculum_learning': {'enabled': False}}, 'data_routing': {'enabled': False, 'random_ltd': {'enabled': False, 'layer_token_lr_schedule': {'enabled': False}}}} [2024-08-21 09:41:06,113] [INFO] [config.py:1000:print] data_efficiency_enabled ...... False [2024-08-21 09:41:06,113] [INFO] [config.py:1000:print] dataloader_drop_last ......... False [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] disable_allgather ............ False [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] dump_state ................... 
False [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] dynamic_loss_scale_args ...... None [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] eigenvalue_enabled ........... False [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] eigenvalue_gas_boundary_resolution 1 [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] eigenvalue_layer_name ........ bert.encoder.layer [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] eigenvalue_layer_num ......... 0 [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] eigenvalue_max_iter .......... 100 [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] eigenvalue_stability ......... 1e-06 [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] eigenvalue_tol ............... 0.01 [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] eigenvalue_verbose ........... False [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] elasticity_enabled ........... False [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] flops_profiler_config ........ { "enabled": false, "recompute_fwd_factor": 0.0, "profile_step": 1, "module_depth": -1, "top_modules": 1, "detailed": true, "output_file": null } [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] fp16_auto_cast ............... None [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] fp16_enabled ................. False [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] fp16_master_weights_and_gradients False [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] global_rank .................. 0 [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] grad_accum_dtype ............. None [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] gradient_accumulation_steps .. 2 [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] gradient_clipping ............ 1.0 [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] gradient_predivide_factor .... 1.0 [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] graph_harvesting ............. False [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] hybrid_engine ................ enabled=False max_out_tokens=512 inference_tp_size=1 release_inference_cache=False pin_parameters=True tp_gather_partition_size=8 [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] initial_dynamic_scale ........ 1 [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] load_universal_checkpoint .... False [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] loss_scale ................... 1.0 [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] memory_breakdown ............. False [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] mics_hierarchial_params_gather False [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] mics_shard_size .............. -1 [2024-08-21 09:41:06,114] [INFO] [config.py:1000:print] monitor_config ............... tensorboard=TensorBoardConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') wandb=WandbConfig(enabled=False, group=None, team=None, project='deepspeed') csv_monitor=CSVConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') enabled=False [2024-08-21 09:41:06,115] [INFO] [config.py:1000:print] nebula_config ................ { "enabled": false, "persistent_storage_path": null, "persistent_time_interval": 100, "num_of_version_in_retention": 2, "enable_nebula_load": true, "load_path": null } [2024-08-21 09:41:06,115] [INFO] [config.py:1000:print] optimizer_legacy_fusion ...... False [2024-08-21 09:41:06,115] [INFO] [config.py:1000:print] optimizer_name ............... 
adamw [2024-08-21 09:41:06,115] [INFO] [config.py:1000:print] optimizer_params ............. {'lr': 4e-08, 'betas': [0.9, 0.999], 'eps': 1e-08, 'weight_decay': 0.01} [2024-08-21 09:41:06,115] [INFO] [config.py:1000:print] pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0, 'pipe_partitioned': True, 'grad_partitioned': True} [2024-08-21 09:41:06,115] [INFO] [config.py:1000:print] pld_enabled .................. False [2024-08-21 09:41:06,115] [INFO] [config.py:1000:print] pld_params ................... False [2024-08-21 09:41:06,115] [INFO] [config.py:1000:print] prescale_gradients ........... False [2024-08-21 09:41:06,115] [INFO] [config.py:1000:print] scheduler_name ............... None [2024-08-21 09:41:06,115] [INFO] [config.py:1000:print] scheduler_params ............. None [2024-08-21 09:41:06,115] [INFO] [config.py:1000:print] seq_parallel_communication_data_type torch.float32 [2024-08-21 09:41:06,115] [INFO] [config.py:1000:print] sparse_attention ............. None [2024-08-21 09:41:06,115] [INFO] [config.py:1000:print] sparse_gradients_enabled ..... False [2024-08-21 09:41:06,115] [INFO] [config.py:1000:print] steps_per_print .............. inf [2024-08-21 09:41:06,115] [INFO] [config.py:1000:print] train_batch_size ............. 8 [2024-08-21 09:41:06,115] [INFO] [config.py:1000:print] train_micro_batch_size_per_gpu 2 [2024-08-21 09:41:06,115] [INFO] [config.py:1000:print] use_data_before_expertparallel False [2024-08-21 09:41:06,115] [INFO] [config.py:1000:print] use_node_local_storage ....... False [2024-08-21 09:41:06,115] [INFO] [config.py:1000:print] wall_clock_breakdown ......... True [2024-08-21 09:41:06,115] [INFO] [config.py:1000:print] weight_quantization_config ... None [2024-08-21 09:41:06,115] [INFO] [config.py:1000:print] world_size ................... 2 [2024-08-21 09:41:06,115] [INFO] [config.py:1000:print] zero_allow_untested_optimizer False [2024-08-21 09:41:06,115] [INFO] [config.py:1000:print] zero_config .................. stage=3 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=1000000000 use_multi_rank_bucket_allreduce=True allgather_partitions=True allgather_bucket_size=500,000,000 overlap_comm=True load_from_fp32_weights=True elastic_checkpoint=False offload_param=None offload_optimizer=None sub_group_size=1000000000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=1000000000 param_persistence_threshold=10000000 model_persistence_threshold=sys.maxsize max_live_parameters=1000000000 max_reuse_distance=1000000000 gather_16bit_weights_on_model_save=True stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False zero_hpz_partition_size=1 zero_quantized_weights=False zero_quantized_nontrainable_weights=False zero_quantized_gradients=False mics_shard_size=-1 mics_hierarchical_params_gather=False memory_efficient_linear=True pipeline_loading_checkpoint=False override_module_apply=True [2024-08-21 09:41:06,115] [INFO] [config.py:1000:print] zero_enabled ................. True [2024-08-21 09:41:06,115] [INFO] [config.py:1000:print] zero_force_ds_cpu_optimizer .. True [2024-08-21 09:41:06,115] [INFO] [config.py:1000:print] zero_optimization_stage ...... 
3 [2024-08-21 09:41:06,116] [INFO] [config.py:986:print_user_config] json = { "zero_optimization": { "stage": 3, "overlap_comm": true, "contiguous_gradients": true, "sub_group_size": 1.000000e+09, "reduce_bucket_size": 1.000000e+09, "stage3_prefetch_bucket_size": 1.000000e+09, "stage3_param_persistence_threshold": 1.000000e+07, "stage3_max_live_parameters": 1.000000e+09, "stage3_max_reuse_distance": 1.000000e+09, "stage3_gather_16bit_weights_on_model_save": true }, "fp16": { "enabled": false, "auto_cast": true, "loss_scale": 0, "initial_scale_power": 32, "loss_scale_window": 1000, "hysteresis": 2, "min_loss_scale": 1 }, "bf16": { "enabled": true }, "optimizer": { "type": "AdamW", "params": { "lr": 4e-08, "betas": [0.9, 0.999], "eps": 1e-08, "weight_decay": 0.01 } }, "gradient_accumulation_steps": 2, "gradient_clipping": 1.0, "steps_per_print": inf, "train_batch_size": 8, "train_micro_batch_size_per_gpu": 2, "wall_clock_breakdown": true } [INFO|trainer.py:1721] 2024-08-21 09:41:06,116 >> Running training [INFO|trainer.py:1722] 2024-08-21 09:41:06,116 >> Num examples = 500,736 [INFO|trainer.py:1723] 2024-08-21 09:41:06,116 >> Num Epochs = 1 [INFO|trainer.py:1724] 2024-08-21 09:41:06,116 >> Instantaneous batch size per device = 2 [INFO|trainer.py:1727] 2024-08-21 09:41:06,116 >> Total train batch size (w. parallel, distributed & accumulation) = 8 [INFO|trainer.py:1728] 2024-08-21 09:41:06,116 >> Gradient Accumulation steps = 2 [INFO|trainer.py:1729] 2024-08-21 09:41:06,116 >> Total optimization steps = 62,592 [INFO|trainer.py:1730] 2024-08-21 09:41:06,118 >> Number of trainable parameters = 1,889,146,880 0%| | 0/62592 [00:00<?, ?it/s][2024-08-21 09:41:09,655] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect) [2024-08-21 09:41:09,664] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect) /home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/runtime/zero/linear.py:49: FutureWarning: torch.cuda.amp.custom_fwd(args...) is deprecated. Please use torch.amp.custom_fwd(args..., device_type='cuda') instead. def forward(ctx, input, weight, bias=None): /home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/runtime/zero/linear.py:67: FutureWarning: torch.cuda.amp.custom_bwd(args...) is deprecated. Please use torch.amp.custom_bwd(args..., device_type='cuda') instead. def backward(ctx, grad_output): /home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/runtime/zero/linear.py:49: FutureWarning: torch.cuda.amp.custom_fwd(args...) is deprecated. Please use torch.amp.custom_fwd(args..., device_type='cuda') instead. def forward(ctx, input, weight, bias=None): /home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/runtime/zero/linear.py:67: FutureWarning: torch.cuda.amp.custom_bwd(args...) is deprecated. Please use torch.amp.custom_bwd(args..., device_type='cuda') instead. def backward(ctx, grad_output): [2024-08-21 09:41:15,436] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect) [2024-08-21 09:41:15,442] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect) /home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/runtime/zero/linear.py:49: FutureWarning: torch.cuda.amp.custom_fwd(args...) is deprecated. Please use torch.amp.custom_fwd(args..., device_type='cuda') instead. 
def forward(ctx, input, weight, bias=None): /home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/runtime/zero/linear.py:67: FutureWarning: torch.cuda.amp.custom_bwd(args...) is deprecated. Please use torch.amp.custom_bwd(args..., device_type='cuda') instead. def backward(ctx, grad_output): /home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/runtime/zero/linear.py:49: FutureWarning: torch.cuda.amp.custom_fwd(args...) is deprecated. Please use torch.amp.custom_fwd(args..., device_type='cuda') instead. def forward(ctx, input, weight, bias=None): /home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/runtime/zero/linear.py:67: FutureWarning: torch.cuda.amp.custom_bwd(args...) is deprecated. Please use torch.amp.custom_bwd(args..., device_type='cuda') instead. def backward(ctx, grad_output): [2024-08-21 09:41:21,203] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect) [2024-08-21 09:41:21,210] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect) /home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/runtime/zero/linear.py:49: FutureWarning: torch.cuda.amp.custom_fwd(args...) is deprecated. Please use torch.amp.custom_fwd(args..., device_type='cuda') instead. def forward(ctx, input, weight, bias=None): /home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/runtime/zero/linear.py:67: FutureWarning: torch.cuda.amp.custom_bwd(args...) is deprecated. Please use torch.amp.custom_bwd(args..., device_type='cuda') instead. def backward(ctx, grad_output): /home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/runtime/zero/linear.py:49: FutureWarning: torch.cuda.amp.custom_fwd(args...) is deprecated. Please use torch.amp.custom_fwd(args..., device_type='cuda') instead. def forward(ctx, input, weight, bias=None): /home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/runtime/zero/linear.py:67: FutureWarning: torch.cuda.amp.custom_bwd(args...) is deprecated. Please use torch.amp.custom_bwd(args..., device_type='cuda') instead. def backward(ctx, grad_output): [2024-08-21 09:41:27,027] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect) /home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/runtime/zero/linear.py:49: FutureWarning: torch.cuda.amp.custom_fwd(args...) is deprecated. Please use torch.amp.custom_fwd(args..., device_type='cuda') instead. def forward(ctx, input, weight, bias=None): /home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/runtime/zero/linear.py:67: FutureWarning: torch.cuda.amp.custom_bwd(args...) is deprecated. Please use torch.amp.custom_bwd(args..., device_type='cuda') instead. def backward(ctx, grad_output): [2024-08-21 09:41:27,139] [INFO] [real_accelerator.py:191:get_accelerator] Setting ds_accelerator to cuda (auto detect) /home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/runtime/zero/linear.py:49: FutureWarning: torch.cuda.amp.custom_fwd(args...) is deprecated. Please use torch.amp.custom_fwd(args..., device_type='cuda') instead. def forward(ctx, input, weight, bias=None): /home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/runtime/zero/linear.py:67: FutureWarning: torch.cuda.amp.custom_bwd(args...) is deprecated. Please use torch.amp.custom_bwd(args..., device_type='cuda') instead. 
def backward(ctx, grad_output): dynamic ViT batch size: 13, images per sample: 6.5, dynamic token length: 2156 [2024-08-21 09:41:35,086] [INFO] [logging.py:96:log_dist] [Rank 0] time (ms) | fwd_microstep: 1766.71 | bwd_microstep: 1551.58 | bwd_inner_microstep: 1395.00 | bwd_allreduce_microstep: 156.33 | step_microstep: 0.10 dynamic ViT batch size: 13, images per sample: 6.5, dynamic token length: 2224 [2024-08-21 09:41:38,799] [INFO] [logging.py:96:log_dist] [Rank 0] time (ms) | optimizer_step: 463.33 [2024-08-21 09:41:38,800] [WARNING] [stage3.py:2069:step] 3 pytorch allocator cache flushes since last step. this happens when there is high memory pressure and is detrimental to performance. if this is happening frequently consider adjusting settings to reduce memory consumption. If you are unable to make the cache flushes go away consider adding get_accelerator().empty_cache() calls in your training loop to ensure that all ranks flush their caches at the same time [2024-08-21 09:41:38,801] [INFO] [logging.py:96:log_dist] [Rank 0] time (ms) | fwd_microstep: 1773.68 | bwd_microstep: 1181.49 | bwd_inner_microstep: 1022.70 | bwd_allreduce_microstep: 158.61 | step_microstep: 741.96 [2024-08-21 09:41:38,802] [INFO] [logging.py:96:log_dist] [Rank 0] time (ms) | fwd: 3540.32 | bwd: 2733.09 | bwd_inner: 2417.79 | bwd_allreduce: 314.94 | step: 742.08 {'loss': 1.3492, 'learning_rate': 2.129925452609159e-11, 'epoch': 0.0} 0%| | 1/62592 [00:32<566:38:54, 32.59s/it]petrel_client is not installed. If you read data locally instead of from ceph, ignore it. Replace train sampler!! petrel_client is not installed. Using PIL to load images. petrel_client is not installed. If you read data locally instead of from ceph, ignore it. Replace train sampler!! petrel_client is not installed. Using PIL to load images. petrel_client is not installed. If you read data locally instead of from ceph, ignore it. Replace train sampler!! petrel_client is not installed. Using PIL to load images. petrel_client is not installed. If you read data locally instead of from ceph, ignore it. Replace train sampler!! petrel_client is not installed. Using PIL to load images. petrel_client is not installed. If you read data locally instead of from ceph, ignore it. Replace train sampler!! petrel_client is not installed. Using PIL to load images. petrel_client is not installed. If you read data locally instead of from ceph, ignore it. Replace train sampler!! petrel_client is not installed. Using PIL to load images. petrel_client is not installed. If you read data locally instead of from ceph, ignore it. Replace train sampler!! petrel_client is not installed. Using PIL to load images. petrel_client is not installed. If you read data locally instead of from ceph, ignore it. Replace train sampler!! petrel_client is not installed. Using PIL to load images. rank1: Traceback (most recent call last): rank1: File "/home/zengzhilin/MM/project/mini_monkey/internvl/train/minimonkey_chat_finetune.py", line 850, in

rank1: File "/home/zengzhilin/MM/project/mini_monkey/internvl/train/minimonkey_chat_finetune.py", line 835, in main rank1: train_result = trainer.train(resume_from_checkpoint=checkpoint) rank1: File "/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/transformers/trainer.py", line 1539, in train rank1: return inner_training_loop( rank1: File "/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/transformers/trainer.py", line 1869, in _inner_training_loop rank1: tr_loss_step = self.training_step(model, inputs) rank1: File "/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/transformers/trainer.py", line 2772, in training_step rank1: loss = self.compute_loss(model, inputs) rank1: File "/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/transformers/trainer.py", line 2795, in compute_loss rank1: outputs = model(inputs) rank1: File "/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl rank1: return self._call_impl(*args, *kwargs) rank1: File "/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl rank1: return forward_call(args, kwargs) rank1: File "/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn rank1: ret_val = func(*args, kwargs) rank1: File "/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/runtime/engine.py", line 1852, in forward rank1: loss = self.module(*inputs, *kwargs) rank1: File "/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl rank1: return self._call_impl(args, kwargs) rank1: File "/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1603, in _call_impl rank1: result = forward_call(*args, kwargs) rank1: File "/home/zengzhilin/MM/project/mini_monkey/internvl/model/internvl_chat/modeling_minimonkey_chat.py", line 152, in forward rank1: input_embeds = self.language_model.get_input_embeddings()(input_ids).clone() rank1: File "/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl rank1: return self._call_impl(*args, *kwargs) rank1: File "/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1592, in _call_impl rank1: args_result = hook(self, args) rank1: File "/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn rank1: ret_val = func(args, kwargs) rank1: File "/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/runtime/zero/parameter_offload.py", line 278, in _pre_forward_module_hook

rank1: File "/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context rank1: return func(*args, kwargs) rank1: File "/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/runtime/zero/parameter_offload.py", line 452, in pre_sub_module_forward_function rank1: param_coordinator.fetch_sub_module(sub_module, forward=True) rank1: File "/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 600, in _fn rank1: return fn(*args, *kwargs) rank1: File "/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn rank1: ret_val = func(args, kwargs) rank1: File "/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context rank1: return func(*args, kwargs) rank1: File "/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py", line 385, in fetch_sub_module rank1: self.all_gather_params(params_to_prefetch, forward) rank1: File "/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn rank1: ret_val = func(*args, **kwargs) rank1: File "/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py", line 434, in all_gather_params rank1: self.all_gatherparams(nonquantized_params, forward, quantize=self.zero_quantized_weights) rank1: File "/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/runtime/zero/partitioned_param_coordinator.py", line 463, in all_gatherparams rank1: handle = param_group[0].all_gather_coalesced(param_group, quantize=quantize) rank1: File "/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn rank1: ret_val = func(*args, kwargs) rank1: File "/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 1256, in all_gather_coalesced rank1: handles.append(_all_gather_dtype(dtype, params, world_size, rank_in_group, ds_process_group)) rank1: File "/home/zengzhilin/.conda/envs/MM/lib/python3.10/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 1128, in _all_gather_dtype rank1: flat_tensor = torch.empty(partition_sz * world_size, rank1: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.07 GiB. GPU 1 has a total capacity of 23.68 GiB of which 586.69 MiB is free. Including non-PyTorch memory, this process has 23.11 GiB memory in use. Of the allocated memory 21.60 GiB is allocated by PyTorch, and 1.08 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

internvl/train/minimonkey_chat_finetune.py FAILED
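
For reference, the OOM message above itself suggests setting `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True`, and the log shows a per-GPU micro batch size of 2 with `max_dynamic_patch: 8`. Below is a minimal, hedged sketch of what one might try on 24 GB cards; it assumes the launch script reads the same `GPUS` and `BATCH_SIZE` variables as the original command, and the remaining levers are listed only as comments since their exact flags depend on the script and DeepSpeed config.

```bash
# Hedged sketch only: common memory-pressure levers for this ZeRO-3 fine-tune.
# The allocator setting is taken verbatim from the PyTorch OOM message above.
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True   # reduce fragmentation

# GPUS and BATCH_SIZE are the same variables used in the original command;
# halving the global batch size lowers activation memory per step.
GPUS=2 BATCH_SIZE=4 sh shell/minimonkey/minimonkey_finetune_full.sh

# Other knobs worth checking (exact names depend on the actual script/config):
# - per-GPU micro batch size (the log shows train_micro_batch_size_per_gpu = 2)
# - max_dynamic_patch (the log shows 8), i.e. fewer image tiles per sample
# - ZeRO-3 offload_optimizer / offload_param in the DeepSpeed JSON
```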

caolingen2022 commented 2 months ago

Is this the official data provided by the authors, or your own data?

TongkunGuan commented 2 months ago

> Is this the official data provided by the authors, or your own data?

The data was downloaded following the Mini-Monkey README.

mxin262 commented 2 months ago

Hi~, does this problem also occur when using eight 3090s?

TongkunGuan commented 2 months ago

> Hi~, does this problem also occur when using eight 3090s?

Thanks~, I have addressed the issue.