microsoft / DeepSpeed-MII

MII makes low-latency and high-throughput inference possible, powered by DeepSpeed.
Apache License 2.0

deepspeed MoE all_to_all communication #526

Open miaomiaoma0703 opened 2 months ago

miaomiaoma0703 commented 2 months ago

How can I measure the all-to-all communication time during inference of MoE models like Qwen1.5-MoE-A2.7B with DeepSpeed?
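One possible approach, sketched below, is to wrap the collective call with a timing decorator and accumulate elapsed time across calls. In an actual DeepSpeed run you would wrap `torch.distributed.all_to_all_single` (the collective DeepSpeed's MoE layer uses for token dispatch and combine) and, on GPU, bracket each call with `torch.cuda.Event` pairs plus a synchronize, since CUDA kernels launch asynchronously and wall-clock timing alone would undercount. The `fake_all_to_all` function here is a hypothetical stand-in so the sketch runs without a distributed setup:

```python
import time
import functools

class CommTimer:
    """Accumulates wall-clock time and call count for a wrapped
    communication function. Sketch only: for real GPU collectives,
    replace perf_counter with CUDA event timing."""

    def __init__(self):
        self.total_s = 0.0
        self.calls = 0

    def wrap(self, fn):
        @functools.wraps(fn)
        def timed(*args, **kwargs):
            start = time.perf_counter()
            out = fn(*args, **kwargs)
            self.total_s += time.perf_counter() - start
            self.calls += 1
            return out
        return timed

# Hypothetical stand-in for torch.distributed.all_to_all_single.
def fake_all_to_all(data):
    return list(reversed(data))

timer = CommTimer()
fake_all_to_all = timer.wrap(fake_all_to_all)

for _ in range(3):
    fake_all_to_all([1, 2, 3])

print(f"all_to_all calls: {timer.calls}, total: {timer.total_s:.6f}s")
```

In practice you would monkey-patch the collective before loading the model (e.g. `torch.distributed.all_to_all_single = timer.wrap(torch.distributed.all_to_all_single)`), run inference, and read the accumulated totals per rank. Alternatively, `torch.profiler` (or Nsight Systems) can report the time spent in all-to-all kernels without any code changes.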