alpa-projects / alpa

Training and serving large-scale neural networks with auto parallelization.
https://alpa.ai
Apache License 2.0

Check failed: operand_dim < ins->operand(0)->shape().rank() (2 vs. 2) Does not support this kind of Gather. #958

Open caixiiaoyang opened 1 year ago

caixiiaoyang commented 1 year ago

Please describe the bug
Aborted (core dumped) during the auto_sharding pass. I have two A100 GPUs. When I use alpa.PipeshardParallel(), the model runs fine, but when I use alpa.ShardParallel(), I get a core dump with the error:

`Check failed: operand_dim < ins->operand(0)->shape().rank() (2 vs. 2) Does not support this kind of Gather.`

I would like to know under what circumstances this error occurs. Can you provide some troubleshooting ideas? The full error is shown in the screenshot below.

Please describe the expected behavior
alpa.ShardParallel() should run the model without crashing, just as alpa.PipeshardParallel() does.

Screenshots
[screenshot attached]

Code snippet to reproduce the problem

Additional information

zigzagcai commented 1 year ago

I also hit this issue when trying to use alpa.ShardParallel() or alpa.PipeshardParallel() to auto-parallelize my LLaMA model.

[screenshot attached]
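For context (an assumption on my part, not something the Alpa maintainers have confirmed): in a LLaMA-style model the token-embedding lookup is lowered by XLA to a Gather instruction with a rank-2 operand (vocab, hidden) and rank-2 indices (batch, seq_len), which matches the shape combination the `operand_dim < ins->operand(0)->shape().rank()` check is rejecting. A pure-Python sketch of the gather semantics involved (illustrative only, no Alpa/XLA code):

```python
# Hypothetical shapes for illustration; the real model's vocab/hidden/batch
# sizes are of course much larger.
vocab, hidden = 4, 3

# Embedding table of shape (vocab, hidden): row v holds values v*hidden .. v*hidden+hidden-1.
embedding = [[float(v * hidden + h) for h in range(hidden)] for v in range(vocab)]

# Token ids of shape (batch, seq_len) -- the rank-2 index tensor of the Gather.
token_ids = [[1, 3], [0, 2]]

# The Gather selects rows of `embedding` by `token_ids`,
# producing an output of shape (batch, seq_len, hidden).
output = [[embedding[t] for t in row] for row in token_ids]

print(output[0][0])  # row 1 of the table -> [3.0, 4.0, 5.0]
```

Alpa's auto_sharding pass has to enumerate sharding strategies for this op, and the crash suggests it bails out on this particular operand/index rank combination rather than falling back to a replicated strategy.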

caixiiaoyang commented 11 months ago

> I also met this issue when trying to use alpa.ShardParallel() or alpa.PipeshardParallel() to auto-parallelize my LLaMA model.

I ran into this same problem while parallelizing LLaMA. Have you managed to solve it?