microsoft / DeepSpeed

DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
https://www.deepspeed.ai/

[BUG] Invalidate trace cache @ step 1: expected module 25, but got module 323, how to resolve it? #5006

Open awzhgw opened 7 months ago

awzhgw commented 7 months ago

Describe the bug

  1. I am training a Mixtral 8x7B model. After about 270 training steps it hangs; after 30 minutes NCCL times out and the process is killed. The log shows:

Invalidate trace cache @ step 1: expected module 25, but got module 323

  2. DeepSpeed version: 0.13.1

  3. The code is:

config = transformers.AutoConfig.from_pretrained(model_args.model_name_or_path)
config.num_hidden_layers = 2
model = MixtralForCausalLM.from_pretrained(
    model_args.model_name_or_path,
    config=config,
    cache_dir=training_args.cache_dir,
    **bnb_model_from_pretrained_args
)
# Mark the sparse-MoE block as a ZeRO-3 leaf module.
deepspeed.utils.set_z3_leaf_modules(model, [MixtralSparseMoeBlock])
  4. My DeepSpeed config is:
{
  "fp16": {
    "enabled": "auto",
    "loss_scale": 0,
    "loss_scale_window": 1000,
    "initial_scale_power": 16,
    "hysteresis": 2,
    "min_loss_scale": 1
  },
  "bf16": {
    "enabled": "auto"
  },
  "optimizer": {
    "type": "AdamW",
    "params": {
      "lr": "auto",
      "betas": "auto",
      "eps": "auto",
      "weight_decay": "auto"
    }
  },
  "scheduler": {
    "type": "WarmupLR",
    "params": {
      "warmup_min_lr": "auto",
      "warmup_max_lr": "auto",
      "warmup_num_steps": "auto"
    }
  },
  "zero_optimization": {
    "stage": 3,
    "offload_optimizer": {
      "device": "cpu",
      "pin_memory": true
    },
    "offload_param": {
      "device": "cpu",
      "pin_memory": true
    },
    "overlap_comm": true,
    "contiguous_gradients": true,
    "sub_group_size": 1e9,
    "reduce_bucket_size": "auto",
    "stage3_prefetch_bucket_size": "auto",
    "stage3_param_persistence_threshold": "auto",
    "stage3_max_live_parameters": 1e9,
    "stage3_max_reuse_distance": 1e9,
    "gather_16bit_weights_on_model_save": true
  },
  "gradient_accumulation_steps": "auto",
  "gradient_clipping": "auto",
  "train_batch_size": "auto",
  "train_micro_batch_size_per_gpu": "auto",
  "steps_per_print": 1e5,
  "wall_clock_breakdown": false
}
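
For reference, here is a self-contained version of the leaf-module registration from the snippet above, with a quick check that it actually matched something. The checkpoint id is only a stand-in for model_args.model_name_or_path, and the return-value behaviour (matched modules returned, ValueError raised when nothing matches) is my understanding of recent DeepSpeed releases rather than something verified on 0.13.1:

import deepspeed
import transformers
from transformers.models.mixtral.modeling_mixtral import (
    MixtralForCausalLM,
    MixtralSparseMoeBlock,
)

# Same setup as above, shrunk to 2 layers so it loads quickly; the checkpoint
# id below is only a placeholder for model_args.model_name_or_path.
config = transformers.AutoConfig.from_pretrained("mistralai/Mixtral-8x7B-v0.1")
config.num_hidden_layers = 2
model = MixtralForCausalLM.from_pretrained("mistralai/Mixtral-8x7B-v0.1", config=config)

# Flag each sparse-MoE block as a ZeRO-3 leaf so the partitioning hooks fetch
# its parameters as one unit instead of tracing into the expert routing.
leaf_modules = deepspeed.utils.set_z3_leaf_modules(model, [MixtralSparseMoeBlock])

# Assumption: the call returns the modules it flagged; with 2 hidden layers we
# expect 2 MixtralSparseMoeBlock instances.
print(f"Registered {len(leaf_modules)} ZeRO-3 leaf modules")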
JakobLS commented 6 months ago

Hi @tohtana,

I get a similar error when using DeepSpeed via Hugging Face Accelerate to train SDXL. It happens during evaluation after the first epoch, at which point training simply freezes:

Invalidate trace cache @ step 4: expected module 1928, but got module 6

My deepspeed config is as follows:

{
    "fp16": {
        "enabled": true, 
        "auto_cast": true,
        "initial_scale_power": 16
    }, 
    "bf16": {
        "enabled": false
    },
    "zero_optimization": {
        "stage": 3,
        "round_robin_gradients": false,
        "load_from_fp32_weights": false,
        "allgather_bucket_size": 5e8,
        "reduce_bucket_size": 5e8,
        "stage3_gather_16bit_weights_on_model_save": true,
        "zero_quantized_weights": false,
        "zero_hpz_partition_size": 1,
        "zero_quantized_gradients": true
    },
    "gradient_clipping": 1.0,
    "steps_per_print": 2000,
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto"
}

Using the following library versions:

accelerate==0.27.2
deepspeed==0.13.4
diffusers==0.27.0.dev0 
torch==2.1.1+cu118
liuchengyuan123 commented 6 months ago

> (Quoting @JakobLS's comment above.)

same!

Sander-houqi commented 4 months ago

Same here, except I am not using MixtralForCausalLM but Qwen2ForCausalLM (no MoE): I get the warning, but it does not break the training process.

Even with the trace cache effectively disabled (see the config below), the warning still appears, but training does not break.

{
  "fp16": {
      "enabled": "auto",
      "loss_scale": 0,
      "loss_scale_window": 1000,
      "initial_scale_power": 16,
      "hysteresis": 2,
      "min_loss_scale": 1
  },
  "bf16": {
      "enabled": "auto"
  },
  "train_micro_batch_size_per_gpu": "auto",
  "train_batch_size": "auto",
  "gradient_accumulation_steps": "auto",
  "zero_optimization": {
      "stage": 3,
      "overlap_comm": true,
      "contiguous_gradients": true,
      "sub_group_size": 1e9,
      "reduce_bucket_size": 5e8,
      "stage3_prefetch_bucket_size": 0,
      "stage3_param_persistence_threshold": 1e6,
      "stage3_max_live_parameters": 0,
      "stage3_max_reuse_distance": 0,
      "stage3_gather_16bit_weights_on_model_save": true
  }
}
vikram71198 commented 4 months ago

I'm facing the same issue as @JakobLS.

After the first epoch, I get the message Invalidate trace cache @ step 0: expected module 0, but got module 456, and then training simply freezes and does not proceed.

chenyunsai commented 4 months ago

I have the same issue. Is there a known way to solve it?

sxhysj commented 2 months ago

Same issue, my deepspeed config is:

{
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 16,
        "hysteresis": 2,
        "min_loss_scale": 1
    },

    "scheduler": {
        "type": "WarmupLR",
        "params": {
            "warmup_min_lr": 0,
            "warmup_max_lr": "auto",
            "warmup_num_steps": "auto"
        }
    },

    "zero_optimization": {
        "stage": 3,
        "offload_optimizer": {
           "device": "nvme",
            "pin_memory": true,
            "nvme_path": "/home/xxx/git/sep/tmp",
            "buffer_count": 40
        },
        "offload_param": {
            "device": "cpu",
            "pin_memory": true,
            "nvme_path": "/home/xxx/git/sep/tmp2",
            "buffer_count": 40
        },
        "overlap_comm": true,
        "contiguous_gradients": true,
        "sub_group_size": 1e9,
        "reduce_bucket_size": "auto",
        "stage3_prefetch_bucket_size": "auto",
        "stage3_param_persistence_threshold": "auto",
        "stage3_max_live_parameters": 1e9,
        "stage3_max_reuse_distance": 1e9,
        "stage3_gather_16bit_weights_on_model_save": "auto",
        "reduce_bucket_size": 1e6
    },

    "gradient_accumulation_steps": "auto",
    "gradient_clipping": "auto",
    "steps_per_print": 2000,
    "train_batch_size": "auto",
    "train_micro_batch_size_per_gpu": "auto",
    "wall_clock_breakdown": false
}
Griffintaur commented 2 months ago

@tohtana Any insights on how to debug this to determine whether the issue is in the code or the configuration?

tjruwase commented 2 months ago

@Griffintaur, can you please see if this new API can help? https://github.com/microsoft/DeepSpeed/pull/4966
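
Assuming #4966 is the ZeRO-3 leaf-module API, i.e. the same deepspeed.utils.set_z3_leaf_modules call already used in the original report, the general pattern for other architectures is to register whichever submodule class wraps the data-dependent control flow (the MoE router in Mixtral's case). A minimal sketch, where MyRoutedBlock and TinyModel are purely hypothetical stand-ins for your model's classes:

import deepspeed
import torch.nn as nn

class MyRoutedBlock(nn.Module):
    """Placeholder for the submodule whose forward takes data-dependent
    branches (e.g. an MoE router); substitute your model's real class."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(8, 8)

    def forward(self, x):
        return self.proj(x)

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.blocks = nn.ModuleList([MyRoutedBlock() for _ in range(2)])

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        return x

model = TinyModel()

# Register the class as a ZeRO-3 leaf *before* deepspeed.initialize / Trainer
# setup, so ZeRO-3 gathers the block's parameters as one unit and the prefetch
# trace no longer depends on which submodules happened to run last step.
deepspeed.utils.set_z3_leaf_modules(model, [MyRoutedBlock])

For the Accelerate/SDXL setups above, the call would presumably go on the model before it is passed to accelerator.prepare; whether a suitable leaf class exists for SDXL is something the commenters above would need to check.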