microsoft / DeepSpeedExamples

Example models using DeepSpeed
Apache License 2.0

CUDA OOM when running DeepSpeed-Chat step 1 with opt-1.3b and one A100-40G #551

Closed: treya-lin closed this issue 1 year ago

treya-lin commented 1 year ago

Hi, I am trying out step 1 of DeepSpeed-Chat with the default example. I have one A100-40G, with torch 1.12.1 (CUDA 11.3, cuDNN 8) and deepspeed==0.9.2 in my local environment, and I ran into a CUDA OOM error.

Command:

 bash ./training_scripts/single_gpu/run_1.3b.sh

Script content (I didn't change anything):

# Note that usually LoRA needs to use larger learning rate
OUTPUT=$1
ZERO_STAGE=$2
if [ "$OUTPUT" == "" ]; then
    OUTPUT=./output
fi
if [ "$ZERO_STAGE" == "" ]; then
    ZERO_STAGE=0
fi
mkdir -p $OUTPUT

deepspeed --num_gpus 1 main.py --model_name_or_path facebook/opt-1.3b \
   --gradient_accumulation_steps 8 --lora_dim 128 --zero_stage $ZERO_STAGE \
   --deepspeed --output_dir $OUTPUT &> $OUTPUT/training.log
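
For reference, the script takes the output directory and ZeRO stage as optional positional arguments (falling back to ./output and stage 0), so an explicit invocation would look like the following; the directory names here are just placeholders:

bash ./training_scripts/single_gpu/run_1.3b.sh ./output 0
# e.g. to try ZeRO stage 2 instead of the default stage 0:
bash ./training_scripts/single_gpu/run_1.3b.sh ./output_z2 2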

For the model (opt-1.3b) and data (Dahoas/rm-static), I downloaded them in advance and exposed them in the project directory via soft links (created roughly as sketched after the listing below). The script seems to detect them.

DeepSpeedExamples/applications/DeepSpeed-Chat/training/step1_supervised_finetuning# ls {facebook,Dahoas}
Dahoas:
rm-static

facebook:
opt-1.3b  opt-350m
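
The links themselves were created along these lines; the source paths are hypothetical placeholders for wherever the files were actually downloaded:

# run from step1_supervised_finetuning; /data/downloads/... is a made-up example path
ln -s /data/downloads/facebook ./facebook   # contains opt-1.3b and opt-350m
ln -s /data/downloads/Dahoas ./Dahoas       # contains rm-static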

In the training log it seems something else was still downloaded (though I can't tell what it is), and the run eventually stopped with a CUDA OOM error. My GPU is a single A100-40G, which according to the documentation should be enough for this step. May I know what I did wrong and how to fix it? Thank you!

Full log:

[2023-05-26 08:16:23,375] [WARNING] [runner.py:191:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2023-05-26 08:16:23,404] [INFO] [runner.py:541:main] cmd = /opt/conda/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMF19 --master_addr=127.0.0.1 --master_port=29500 --enable_each_rank_log=None main.py --model_name_or_path facebook/opt-1.3b --gradient_accumulation_steps 8 --lora_dim 128 --zero_stage 0 --deepspeed --output_dir ./output
[2023-05-26 08:16:24,917] [INFO] [launch.py:222:main] 0 NV_LIBNCCL_DEV_PACKAGE=libnccl-dev=2.9.9-1+cuda11.3
[2023-05-26 08:16:24,917] [INFO] [launch.py:222:main] 0 NCCL_VERSION=2.9.9-1
[2023-05-26 08:16:24,917] [INFO] [launch.py:222:main] 0 NV_LIBNCCL_PACKAGE_VERSION=2.9.9-1
[2023-05-26 08:16:24,917] [INFO] [launch.py:222:main] 0 NV_LIBNCCL_PACKAGE=libnccl2=2.9.9-1+cuda11.3
[2023-05-26 08:16:24,917] [INFO] [launch.py:222:main] 0 NV_LIBNCCL_DEV_PACKAGE_NAME=libnccl-dev
[2023-05-26 08:16:24,917] [INFO] [launch.py:222:main] 0 NV_LIBNCCL_PACKAGE_NAME=libnccl2
[2023-05-26 08:16:24,917] [INFO] [launch.py:222:main] 0 NV_LIBNCCL_DEV_PACKAGE_VERSION=2.9.9-1
[2023-05-26 08:16:24,917] [INFO] [launch.py:229:main] WORLD INFO DICT: {'localhost': [0]}
[2023-05-26 08:16:24,917] [INFO] [launch.py:235:main] nnodes=1, num_local_procs=1, node_rank=0
[2023-05-26 08:16:24,917] [INFO] [launch.py:246:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0]})
[2023-05-26 08:16:24,917] [INFO] [launch.py:247:main] dist_world_size=1
[2023-05-26 08:16:24,917] [INFO] [launch.py:249:main] Setting CUDA_VISIBLE_DEVICES=0
[2023-05-26 08:16:27,076] [INFO] [comm.py:622:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
Downloading and preparing dataset None/None to /root/.cache/huggingface/datasets/parquet/default-047fdfdfe6d5967d/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...

Downloading data files:   0%|          | 0/2 [00:00<?, ?it/s]
Downloading data files: 100%|██████████| 2/2 [00:00<00:00, 12846.26it/s]

Extracting data files:   0%|          | 0/2 [00:00<?, ?it/s]
Extracting data files: 100%|██████████| 2/2 [00:00<00:00, 2385.16it/s]

Generating train split:   0%|          | 0/76256 [00:00<?, ? examples/s]
Generating train split:  66%|██████▌   | 50000/76256 [00:00<00:00, 405710.88 examples/s]

Generating test split:   0%|          | 0/5103 [00:00<?, ? examples/s]

Dataset parquet downloaded and prepared to /root/.cache/huggingface/datasets/parquet/default-047fdfdfe6d5967d/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec. Subsequent calls will reuse this data.

  0%|          | 0/2 [00:00<?, ?it/s]
100%|██████████| 2/2 [00:00<00:00, 822.17it/s]
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
    - Avoid using `tokenizers` before the fork if possible
    - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
Using /root/.cache/torch_extensions/py37_cu113 as PyTorch extensions root...
Creating extension directory /root/.cache/torch_extensions/py37_cu113/fused_adam...
Detected CUDA files, patching ldflags
Emitting ninja build file /root/.cache/torch_extensions/py37_cu113/fused_adam/build.ninja...
Building extension module fused_adam...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/3] c++ -MMD -MF fused_adam_frontend.o.d -DTORCH_EXTENSION_NAME=fused_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/opt/conda/lib/python3.7/site-packages/deepspeed/ops/csrc/includes -I/opt/conda/lib/python3.7/site-packages/deepspeed/ops/csrc/adam -isystem /opt/conda/lib/python3.7/site-packages/torch/include -isystem /opt/conda/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/lib/python3.7/site-packages/torch/include/TH -isystem /opt/conda/lib/python3.7/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /opt/conda/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -O3 -std=c++14 -g -Wno-reorder -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -c /opt/conda/lib/python3.7/site-packages/deepspeed/ops/csrc/adam/fused_adam_frontend.cpp -o fused_adam_frontend.o 
[2/3] /usr/local/cuda/bin/nvcc  -DTORCH_EXTENSION_NAME=fused_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/opt/conda/lib/python3.7/site-packages/deepspeed/ops/csrc/includes -I/opt/conda/lib/python3.7/site-packages/deepspeed/ops/csrc/adam -isystem /opt/conda/lib/python3.7/site-packages/torch/include -isystem /opt/conda/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/lib/python3.7/site-packages/torch/include/TH -isystem /opt/conda/lib/python3.7/site-packages/torch/include/THC -isystem /usr/local/cuda/include -isystem /opt/conda/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_80,code=sm_80 --compiler-options '-fPIC' -O3 -DVERSION_GE_1_1 -DVERSION_GE_1_3 -DVERSION_GE_1_5 -lineinfo --use_fast_math -gencode=arch=compute_80,code=sm_80 -gencode=arch=compute_80,code=compute_80 -std=c++14 -c /opt/conda/lib/python3.7/site-packages/deepspeed/ops/csrc/adam/multi_tensor_adam.cu -o multi_tensor_adam.cuda.o 
[3/3] c++ fused_adam_frontend.o multi_tensor_adam.cuda.o -shared -L/opt/conda/lib/python3.7/site-packages/torch/lib -lc10 -lc10_cuda -ltorch_cpu -ltorch_cuda_cu -ltorch_cuda_cpp -ltorch -ltorch_python -L/usr/local/cuda/lib64 -lcudart -o fused_adam.so
Loading extension module fused_adam...
Time to load fused_adam op: 16.51078462600708 seconds
[2023-05-26 08:17:15,591] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed info: version=0.9.2, git-hash=unknown, git-branch=unknown
[2023-05-26 08:17:15,591] [INFO] [comm.py:616:init_distributed] Distributed backend already initialized
[2023-05-26 08:17:16,374] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
[2023-05-26 08:17:16,376] [INFO] [logging.py:96:log_dist] [Rank 0] Removing param_group that has no 'params' in the client Optimizer
[2023-05-26 08:17:16,376] [INFO] [logging.py:96:log_dist] [Rank 0] Using client Optimizer as basic optimizer
[2023-05-26 08:17:16,417] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Basic Optimizer = FusedAdam
[2023-05-26 08:17:16,417] [INFO] [logging.py:96:log_dist] [Rank 0] Creating fp16 optimizer with dynamic loss scale
[2023-05-26 08:17:16,446] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed Final Optimizer = FusedAdam
[2023-05-26 08:17:16,446] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed using client LR scheduler
[2023-05-26 08:17:16,446] [INFO] [logging.py:96:log_dist] [Rank 0] DeepSpeed LR Scheduler = <torch.optim.lr_scheduler.LambdaLR object at 0x7f040023a410>
[2023-05-26 08:17:16,447] [INFO] [logging.py:96:log_dist] [Rank 0] step=0, skipped=0, lr=[0.001, 0.001], mom=[(0.9, 0.95), (0.9, 0.95)]
[2023-05-26 08:17:16,447] [INFO] [config.py:955:print] DeepSpeedEngine configuration:
[2023-05-26 08:17:16,448] [INFO] [config.py:959:print]   activation_checkpointing_config  {
    "partition_activations": false, 
    "contiguous_memory_optimization": false, 
    "cpu_checkpointing": false, 
    "number_checkpoints": null, 
    "synchronize_checkpoint_boundary": false, 
    "profile": false
}
[2023-05-26 08:17:16,448] [INFO] [config.py:959:print]   aio_config ................... {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True}
[2023-05-26 08:17:16,448] [INFO] [config.py:959:print]   amp_enabled .................. False
[2023-05-26 08:17:16,448] [INFO] [config.py:959:print]   amp_params ................... False
[2023-05-26 08:17:16,448] [INFO] [config.py:959:print]   autotuning_config ............ {
    "enabled": false, 
    "start_step": null, 
    "end_step": null, 
    "metric_path": null, 
    "arg_mappings": null, 
    "metric": "throughput", 
    "model_info": null, 
    "results_dir": "autotuning_results", 
    "exps_dir": "autotuning_exps", 
    "overwrite": true, 
    "fast": true, 
    "start_profile_step": 3, 
    "end_profile_step": 5, 
    "tuner_type": "gridsearch", 
    "tuner_early_stopping": 5, 
    "tuner_num_trials": 50, 
    "model_info_path": null, 
    "mp_size": 1, 
    "max_train_batch_size": null, 
    "min_train_batch_size": 1, 
    "max_train_micro_batch_size_per_gpu": 1.024000e+03, 
    "min_train_micro_batch_size_per_gpu": 1, 
    "num_tuning_micro_batch_sizes": 3
}
[2023-05-26 08:17:16,448] [INFO] [config.py:959:print]   bfloat16_enabled ............. False
[2023-05-26 08:17:16,448] [INFO] [config.py:959:print]   checkpoint_parallel_write_pipeline  False
[2023-05-26 08:17:16,448] [INFO] [config.py:959:print]   checkpoint_tag_validation_enabled  True
[2023-05-26 08:17:16,448] [INFO] [config.py:959:print]   checkpoint_tag_validation_fail  False
[2023-05-26 08:17:16,448] [INFO] [config.py:959:print]   comms_config ................. <deepspeed.comm.config.DeepSpeedCommsConfig object at 0x7f036994e550>
[2023-05-26 08:17:16,448] [INFO] [config.py:959:print]   communication_data_type ...... None
[2023-05-26 08:17:16,448] [INFO] [config.py:959:print]   compression_config ........... {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}}
[2023-05-26 08:17:16,448] [INFO] [config.py:959:print]   curriculum_enabled_legacy .... False
[2023-05-26 08:17:16,448] [INFO] [config.py:959:print]   curriculum_params_legacy ..... False
[2023-05-26 08:17:16,448] [INFO] [config.py:959:print]   data_efficiency_config ....... {'enabled': False, 'seed': 1234, 'data_sampling': {'enabled': False, 'num_epochs': 1000, 'num_workers': 0, 'curriculum_learning': {'enabled': False}}, 'data_routing': {'enabled': False, 'random_ltd': {'enabled': False, 'layer_token_lr_schedule': {'enabled': False}}}}
[2023-05-26 08:17:16,448] [INFO] [config.py:959:print]   data_efficiency_enabled ...... False
[2023-05-26 08:17:16,448] [INFO] [config.py:959:print]   dataloader_drop_last ......... False
[2023-05-26 08:17:16,448] [INFO] [config.py:959:print]   disable_allgather ............ False
[2023-05-26 08:17:16,448] [INFO] [config.py:959:print]   dump_state ................... False
[2023-05-26 08:17:16,448] [INFO] [config.py:959:print]   dynamic_loss_scale_args ...... {'init_scale': 65536, 'scale_window': 100, 'delayed_shift': 2, 'min_scale': 1}
[2023-05-26 08:17:16,448] [INFO] [config.py:959:print]   eigenvalue_enabled ........... False
[2023-05-26 08:17:16,448] [INFO] [config.py:959:print]   eigenvalue_gas_boundary_resolution  1
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   eigenvalue_layer_name ........ bert.encoder.layer
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   eigenvalue_layer_num ......... 0
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   eigenvalue_max_iter .......... 100
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   eigenvalue_stability ......... 1e-06
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   eigenvalue_tol ............... 0.01
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   eigenvalue_verbose ........... False
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   elasticity_enabled ........... False
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   flops_profiler_config ........ {
    "enabled": false, 
    "profile_step": 1, 
    "module_depth": -1, 
    "top_modules": 1, 
    "detailed": true, 
    "output_file": null
}
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   fp16_auto_cast ............... False
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   fp16_enabled ................. True
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   fp16_master_weights_and_gradients  False
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   global_rank .................. 0
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   grad_accum_dtype ............. None
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   gradient_accumulation_steps .. 8
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   gradient_clipping ............ 1.0
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   gradient_predivide_factor .... 1.0
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   hybrid_engine ................ enabled=False max_out_tokens=512 inference_tp_size=1 release_inference_cache=False pin_parameters=True tp_gather_partition_size=8
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   initial_dynamic_scale ........ 65536
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   load_universal_checkpoint .... False
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   loss_scale ................... 0
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   memory_breakdown ............. False
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   mics_hierarchial_params_gather  False
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   mics_shard_size .............. -1
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   monitor_config ............... tensorboard=TensorBoardConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') wandb=WandbConfig(enabled=False, group=None, team=None, project='deepspeed') csv_monitor=CSVConfig(enabled=False, output_path='', job_name='DeepSpeedJobName') enabled=False
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   nebula_config ................ {
    "enabled": false, 
    "persistent_storage_path": null, 
    "persistent_time_interval": 100, 
    "num_of_version_in_retention": 2, 
    "enable_nebula_load": true, 
    "load_path": null
}
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   optimizer_legacy_fusion ...... False
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   optimizer_name ............... None
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   optimizer_params ............. None
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   pipeline ..................... {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0}
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   pld_enabled .................. False
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   pld_params ................... False
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   prescale_gradients ........... False
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   scheduler_name ............... None
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   scheduler_params ............. None
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   sparse_attention ............. None
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   sparse_gradients_enabled ..... False
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   steps_per_print .............. 10
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   train_batch_size ............. 128
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   train_micro_batch_size_per_gpu  16
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   use_node_local_storage ....... False
[2023-05-26 08:17:16,449] [INFO] [config.py:959:print]   wall_clock_breakdown ......... False
[2023-05-26 08:17:16,450] [INFO] [config.py:959:print]   world_size ................... 1
[2023-05-26 08:17:16,450] [INFO] [config.py:959:print]   zero_allow_untested_optimizer  False
[2023-05-26 08:17:16,450] [INFO] [config.py:959:print]   zero_config .................. stage=0 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=500,000,000 allgather_partitions=True allgather_bucket_size=500,000,000 overlap_comm=False load_from_fp32_weights=True elastic_checkpoint=False offload_param=DeepSpeedZeroOffloadParamConfig(device='none', nvme_path=None, buffer_count=5, buffer_size=100,000,000, max_in_cpu=1,000,000,000, pin_memory=False) offload_optimizer=DeepSpeedZeroOffloadOptimizerConfig(device='none', nvme_path=None, buffer_count=4, pin_memory=False, pipeline=False, pipeline_read=False, pipeline_write=False, fast_init=False) sub_group_size=1,000,000,000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=30000000 param_persistence_threshold=10000 model_persistence_threshold=sys.maxsize max_live_parameters=30000000 max_reuse_distance=1,000,000,000 gather_16bit_weights_on_model_save=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False mics_shard_size=-1 mics_hierarchical_params_gather=False memory_efficient_linear=False
[2023-05-26 08:17:16,450] [INFO] [config.py:959:print]   zero_enabled ................. False
[2023-05-26 08:17:16,450] [INFO] [config.py:959:print]   zero_force_ds_cpu_optimizer .. True
[2023-05-26 08:17:16,450] [INFO] [config.py:959:print]   zero_optimization_stage ...... 0
[2023-05-26 08:17:16,450] [INFO] [config.py:951:print_user_config]   json = {
    "train_batch_size": 128, 
    "train_micro_batch_size_per_gpu": 16, 
    "steps_per_print": 10, 
    "zero_optimization": {
        "stage": 0, 
        "offload_param": {
            "device": "none"
        }, 
        "offload_optimizer": {
            "device": "none"
        }, 
        "stage3_param_persistence_threshold": 1.000000e+04, 
        "stage3_max_live_parameters": 3.000000e+07, 
        "stage3_prefetch_bucket_size": 3.000000e+07, 
        "memory_efficient_linear": false
    }, 
    "fp16": {
        "enabled": true, 
        "loss_scale_window": 100
    }, 
    "gradient_clipping": 1.0, 
    "prescale_gradients": false, 
    "wall_clock_breakdown": false, 
    "hybrid_engine": {
        "enabled": false, 
        "max_out_tokens": 512, 
        "inference_tp_size": 1, 
        "release_inference_cache": false, 
        "pin_parameters": true, 
        "tp_gather_partition_size": 8
    }
}
Using /root/.cache/torch_extensions/py37_cu113 as PyTorch extensions root...
Creating extension directory /root/.cache/torch_extensions/py37_cu113/utils...
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
    - Avoid using `tokenizers` before the fork if possible
    - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
Emitting ninja build file /root/.cache/torch_extensions/py37_cu113/utils/build.ninja...
Building extension module utils...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] c++ -MMD -MF flatten_unflatten.o.d -DTORCH_EXTENSION_NAME=utils -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -isystem /opt/conda/lib/python3.7/site-packages/torch/include -isystem /opt/conda/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /opt/conda/lib/python3.7/site-packages/torch/include/TH -isystem /opt/conda/lib/python3.7/site-packages/torch/include/THC -isystem /opt/conda/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -c /opt/conda/lib/python3.7/site-packages/deepspeed/ops/csrc/utils/flatten_unflatten.cpp -o flatten_unflatten.o 
[2/2] c++ flatten_unflatten.o -shared -L/opt/conda/lib/python3.7/site-packages/torch/lib -lc10 -ltorch_cpu -ltorch -ltorch_python -o utils.so
Loading extension module utils...
Time to load utils op: 12.50208592414856 seconds
***** Running training *****
***** Evaluating perplexity, Epoch 0/1 *****
ppl: 4391.341796875
Beginning of Epoch 1/1, Total Micro Batches 954
Traceback (most recent call last):
  File "main.py", line 343, in <module>
    main()
  File "main.py", line 314, in main
    outputs = model(**batch, use_cache=False)
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/deepspeed/utils/nvtx.py", line 15, in wrapped_fn
    ret_val = func(*args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/deepspeed/runtime/engine.py", line 1724, in forward
    loss = self.module(*inputs, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/transformers/models/opt/modeling_opt.py", line 947, in forward
    return_dict=return_dict,
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/transformers/models/opt/modeling_opt.py", line 710, in forward
    use_cache=use_cache,
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/transformers/models/opt/modeling_opt.py", line 334, in forward
    output_attentions=output_attentions,
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/transformers/models/opt/modeling_opt.py", line 224, in forward
    attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attention_mask
RuntimeError: CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 39.59 GiB total capacity; 38.23 GiB already allocated; 42.94 MiB free; 38.40 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
[2023-05-26 08:17:46,012] [INFO] [launch.py:428:sigkill_handler] Killing subprocess 374
[2023-05-26 08:17:46,012] [ERROR] [launch.py:434:sigkill_handler] ['/opt/conda/bin/python', '-u', 'main.py', '--local_rank=0', '--model_name_or_path', 'facebook/opt-1.3b', '--gradient_accumulation_steps', '8', '--lora_dim', '128', '--zero_stage', '0', '--deepspeed', '--output_dir', './output'] exits with return code = 1
treya-lin commented 1 year ago

This can be solved by reducing the per-device batch size and increasing gradient_accumulation_steps, as sketched below.
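
For example, assuming main.py exposes a --per_device_train_batch_size flag (the log above shows train_micro_batch_size_per_gpu=16 and gradient_accumulation_steps=8, i.e. an effective batch size of 128), halving the per-device batch and doubling the accumulation keeps the effective batch size unchanged while roughly halving activation memory:

deepspeed --num_gpus 1 main.py --model_name_or_path facebook/opt-1.3b \
   --per_device_train_batch_size 8 --gradient_accumulation_steps 16 \
   --lora_dim 128 --zero_stage 0 \
   --deepspeed --output_dir ./output &> ./output/training.log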