[Open] caixiiaoyang opened this issue 1 year ago
I also met this issue when trying to use alpa.ShardParallel() or alpa.PipeshardParallel() to auto-parallelize my LLaMA model.
I also ran into this problem while parallelizing LLaMA. Has your problem been solved?
Please describe the bug
Aborted (core dumped)

Please describe the expected behavior
I have two A100 GPUs. When I use alpa.PipeshardParallel(), the model runs fine, but when I use alpa.ShardParallel(), I get a core dump. The error occurs during the auto_sharding pass:

Check failed: operand_dim < ins->operand(0)->shape().rank() (2 vs. 2) Does not support this kind of Gather.

I would like to know under what circumstances this error occurs. Can you provide some troubleshooting ideas? The specific error is shown in the screenshot below.

Screenshots
Code snippet to reproduce the problem
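No snippet was attached to the report. For reference, below is a minimal sketch of how alpa.ShardParallel() is typically wired up with a JAX/Flax training step; a toy MLP stands in for the reporter's LLaMA model, and all names (ToyMLP, make_state, train_step) are placeholders rather than the reporter's actual code.

```python
# Minimal sketch only: a toy Flax MLP stands in for the reporter's LLaMA model.
import alpa
import jax
import jax.numpy as jnp
import optax
from flax import linen as nn
from flax.training import train_state


class ToyMLP(nn.Module):
    hidden: int = 1024

    @nn.compact
    def __call__(self, x):
        x = nn.Dense(self.hidden)(x)
        x = nn.relu(x)
        return nn.Dense(self.hidden)(x)


def make_state(rng, batch):
    # Build parameters and a standard Flax TrainState with an Adam optimizer.
    model = ToyMLP()
    params = model.init(rng, batch["x"])["params"]
    return train_state.TrainState.create(
        apply_fn=model.apply, params=params, tx=optax.adam(1e-3)
    )


# alpa.ShardParallel() is the method that crashes for the reporter's real model;
# alpa.PipeshardParallel() is the path they say runs fine (that method
# additionally needs a Ray cluster set up via alpa.init).
@alpa.parallelize(method=alpa.ShardParallel())
def train_step(state, batch):
    def loss_fn(params):
        out = state.apply_fn({"params": params}, batch["x"])
        return jnp.mean((out - batch["y"]) ** 2)

    grads = jax.grad(loss_fn)(state.params)
    return state.apply_gradients(grads=grads)


if __name__ == "__main__":
    rng = jax.random.PRNGKey(0)
    batch = {"x": jnp.ones((8, 1024)), "y": jnp.ones((8, 1024))}
    state = make_state(rng, batch)
    state = train_step(state, batch)
```

With a toy model like this, auto_sharding usually succeeds; the reported "Does not support this kind of Gather" check failure appears only with the reporter's LLaMA graph, so a real reproduction would substitute that model here.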
Additional information
Add any other context about the problem here or include any logs that would be helpful to diagnose the problem.