microsoft / DeepSpeed

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
https://www.deepspeed.ai/
Apache License 2.0

[BUG] Zero3 causes AttributeError: 'NoneType' object has no attribute 'numel' in continual training #5602

Closed: thkimYonsei closed this 1 month ago

thkimYonsei commented 3 months ago

I was training a LLaVA model using DeepSpeed ZeRO-3. My goal is continual training: training the model sequentially on different datasets. I create the LLaVA model once, and inside a for-loop I create a new dataset and a new trainer, then call trainer.train(). On the first iteration of the loop, training works properly. However, on the second iteration I get one warning and one error:

warning: "Invalidate trace cache @ step XX: expected module XX, but got module XX"
error: AttributeError: 'NoneType' object has no attribute 'numel', raised at the same location as this issue

Screenshots: (warning and traceback images omitted)

But when I simply change the DeepSpeed config to use ZeRO-2 instead of ZeRO-3, no error occurs. I want to use ZeRO-3 so I can train with a larger batch size. Can you help me out with this?
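The failing loop can be sketched as follows. This is a minimal runnable reconstruction of the pattern described above, not the reporter's actual code: the Trainer class and make_dataset are stand-ins for the real LLaVA dataset and Hugging Face Trainer construction, and no DeepSpeed engine is created here.

```python
def make_dataset(path):
    # Stand-in for building a new dataset each iteration.
    return {"path": path}

class Trainer:
    # Stand-in for the real trainer; a fresh one is created per stage.
    def __init__(self, model, dataset):
        self.model, self.dataset = model, dataset
        self.trained = False

    def train(self):
        # With ZeRO-3, the reporter sees the NoneType.numel failure
        # here on the second loop iteration; ZeRO-2 runs cleanly.
        self.trained = True

model = object()  # the model is created once and reused across stages
trainers = []
for path in ["dataset_a", "dataset_b"]:
    trainer = Trainer(model, make_dataset(path))
    trainer.train()
    trainers.append(trainer)
```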

System info

zero3.json that I used:

{
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 16,
        "hysteresis": 2,
        "min_loss_scale": 1
    },
    "bf16": {
        "enabled": "auto"
    },
    "train_micro_batch_size_per_gpu": "auto",
    "train_batch_size": "auto",
    "gradient_accumulation_steps": "auto",
    "zero_optimization": {
        "stage": 3,
        "overlap_comm": true,
        "contiguous_gradients": true,
        "sub_group_size": 1e9,
        "reduce_bucket_size": "auto",
        "stage3_prefetch_bucket_size": "auto",
        "stage3_param_persistence_threshold": "auto",
        "stage3_max_live_parameters": 1e9,
        "stage3_max_reuse_distance": 1e9,
        "stage3_gather_16bit_weights_on_model_save": true
    }
}

zero2.json

{
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 16,
        "hysteresis": 2,
        "min_loss_scale": 1
    },
    "bf16": {
        "enabled": "auto"
    },
    "train_micro_batch_size_per_gpu": "auto",
    "train_batch_size": "auto",
    "gradient_accumulation_steps": "auto",
    "zero_optimization": {
        "stage": 2,
        "overlap_comm": true,
        "contiguous_gradients": true,
        "sub_group_size": 1e9,
        "reduce_bucket_size": "auto"
    }
}

Xirid commented 3 months ago

I had the same issue and solved it by reloading the whole model on every iteration. However, I am now getting OOM (out-of-memory) errors because memory is not freed after the second epoch, but I guess that is a different issue.

tjruwase commented 3 months ago

@Xirid, please try the following API to free engine memory: https://deepspeed.readthedocs.io/en/latest/zero3.html#gpu-memory-management
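The API at that link is empty_partition_cache() on the DeepSpeed engine. A minimal runnable sketch of the suggested pattern, with a stub class standing in for the real DeepSpeedEngine (the stub, continual_train, and train_fn are illustrative names, not DeepSpeed code):

```python
class StubEngine:
    """Stand-in for a DeepSpeedEngine. The real engine exposes
    empty_partition_cache() to release GPU memory held by ZeRO-3's
    gathered parameters."""
    def __init__(self):
        self.cache_clears = 0

    def empty_partition_cache(self):
        # The real call frees the parameter partition cache on GPU.
        self.cache_clears += 1

def continual_train(engine, datasets, train_fn):
    # Release the partition cache after each training stage so the
    # next stage starts from a clean state.
    for dataset in datasets:
        train_fn(dataset)
        engine.empty_partition_cache()

engine = StubEngine()
continual_train(engine, ["dataset_a", "dataset_b"], lambda ds: None)
```

With the real integration, the engine would come from deepspeed.initialize() (or the trainer's DeepSpeed handle) rather than a stub.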

apToll commented 2 months ago

I also ran into this: I create a new dataset and a new trainer inside a for-loop and then call trainer.train(). On the first iteration of the loop, training works fine. However, on the second iteration it fails with: AttributeError: 'NoneType' object has no attribute 'numel'. I tried freeing GPU memory, but it did not help. The code I used to free it: # Free GPU memory consumed by model parameters ds_engine.empty_partition_cache()

tjruwase commented 1 month ago

Closing this issue due to lack of response. Please reopen if needed.