laekov / fastmoe

A fast MoE impl for PyTorch
https://fastmoe.ai
Apache License 2.0

Detailed documentation about model parallelism #214

Open ZSL98 opened 1 month ago

ZSL98 commented 1 month ago

Hi, I am trying to use the model parallelism feature, but I found the documentation really unclear. `/doc/parallelism/README.md` says that the code is in the adapter, but where exactly is that code? Or could you please answer my questions below? Thank you very much!

For example, when the world_size and the global number of experts are both 8, the basic expert parallelism setup looks like this:

    from fmoe.transformer import FMoETransformerMLP

    # One process per GPU; each of the 8 workers hosts num_expert=1 local
    # expert, so there are 8 experts globally (expert parallelism).
    fastermoe = FMoETransformerMLP(num_expert=1,
                                   d_model=config.hidden_size,
                                   d_hidden=config.ffn_hidden_size,
                                   world_size=8,
                                   top_k=2)
    y = fastermoe(x)
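
For completeness, here is the distributed setup I am assuming around this layer (a minimal sketch; the launch command and environment handling are standard torch.distributed, and I assume fastmoe picks up the default process group when no communication group is passed explicitly):

    # launched with: torchrun --nproc_per_node=8 train.py
    import torch
    import torch.distributed as dist

    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()   # 0..7; worker r hosts global expert r
    torch.cuda.set_device(rank % torch.cuda.device_count())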

Now I want to change this to TP=8, which means each expert is split into 8 slices. What is the easiest way to achieve that? I only need the forward pass. And what about TP=4?
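
To make the question concrete, this is the kind of slicing I mean, written as a minimal Megatron-style sketch in plain PyTorch rather than with fastmoe's API (the class `TPSlicedExpert` and everything in it are hypothetical, just to illustrate the question): the first FFN layer is column-parallel, so with TP=8 each rank holds d_hidden/8 of the hidden units, the second layer is row-parallel, and an all-reduce sums the partial outputs.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import torch.distributed as dist

    class TPSlicedExpert(nn.Module):
        """Hypothetical: one tensor-parallel slice of a single expert's FFN."""
        def __init__(self, d_model, d_hidden, tp_size, tp_group=None):
            super().__init__()
            assert d_hidden % tp_size == 0
            shard = d_hidden // tp_size   # d_hidden/8 for TP=8, /4 for TP=4
            self.tp_group = tp_group
            # Column-parallel first layer: this rank owns `shard` hidden units.
            self.w1 = nn.Linear(d_model, shard)
            # Row-parallel second layer: bias omitted so the all-reduce below
            # does not add it tp_size times.
            self.w2 = nn.Linear(shard, d_model, bias=False)

        def forward(self, x):
            partial = self.w2(F.gelu(self.w1(x)))
            # Sum the partial results across the TP group -> full FFN output.
            dist.all_reduce(partial, op=dist.ReduceOp.SUM, group=self.tp_group)
            return partial

TP=4 would be the same sketch with tp_size=4. Is there an existing way in fastmoe (e.g. through the adapter mentioned in the README) to get this behavior without hand-slicing the experts like above?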