james-oldfield / muMoE

[arXiv'24] Multilinear Mixture of Experts: Scalable Expert Specialization through Factorization
http://james-oldfield.github.io/muMoE

Is it suitable for autoregressive model? #1

Closed by jiangsongtao 2 months ago

jiangsongtao commented 2 months ago

Thanks for the great work! I have a question: is this kind of MoE suitable for autoregressive models?

james-oldfield commented 2 months ago

Hi Eric, thanks a lot for your interest in our work!

Whilst we didn't focus on autoregressive models/LLMs in the pre-print, as far as model form goes, MMoEs are absolutely a viable alternative layer choice anywhere you already perform conditional computation through a regular MoE.
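For intuition, here is a minimal sketch of what such a factorized mixture layer can look like in PyTorch. The class and parameter names (`CPMixtureSketch`, `A`, `B`, `C`, `rank`) are illustrative only and are not the repository's actual API:

```python
import torch
import torch.nn as nn

class CPMixtureSketch(nn.Module):
    """Illustrative CP-factorized mixture layer (not the repo's actual API).

    Given expert coefficients a (batch, n_experts) and inputs x (batch, d_in),
    the implicit expert weight tensor of shape (n_experts, d_in, d_out) is never
    materialized; it is represented by three rank-R factor matrices A, B, C.
    """
    def __init__(self, n_experts, d_in, d_out, rank):
        super().__init__()
        self.A = nn.Parameter(torch.randn(n_experts, rank) * 0.02)  # expert-mode factors
        self.B = nn.Parameter(torch.randn(d_in, rank) * 0.02)       # input-mode factors
        self.C = nn.Parameter(torch.randn(rank, d_out) * 0.02)      # output-mode factors

    def forward(self, a, x):
        # y[b, o] = sum_r (a @ A)[b, r] * (x @ B)[b, r] * C[r, o]
        return ((a @ self.A) * (x @ self.B)) @ self.C
```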

We had very promising results from initial experiments training 124M-parameter GPT-2 models from scratch (for next-token prediction) with CPMMoEs, replacing all the MLPs' linear layers. The only caveat is that you probably want to use LayerNorm rather than BatchNorm1d when generating the expert coefficients, given the variable input length.
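As a hedged sketch of that caveat, an expert-coefficient (gating) head built around LayerNorm might look like the following; the sizes and module layout are my own illustrative choices, not the library's interface:

```python
import torch
import torch.nn as nn

d_model, n_experts = 768, 32  # illustrative sizes (GPT-2 small width; expert count is arbitrary)

# Hypothetical gate producing per-token expert coefficients. LayerNorm normalizes
# each token's features independently, so it behaves identically for any sequence
# length, unlike BatchNorm1d, which pools statistics across the batch/time axis.
expert_gate = nn.Sequential(
    nn.LayerNorm(d_model),
    nn.Linear(d_model, n_experts),
    nn.Softmax(dim=-1),
)

a = expert_gate(torch.randn(4, 128, d_model))  # (batch, seq, d_model) -> (batch, seq, n_experts)
```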

Hope this is helpful -- do let us know how you get on :)

jiangsongtao commented 2 months ago

Thanks so much for your detailed reply!

I still have a question: is it possible to load a pretrained MLP as a parameter and add it to CPMMoEs?

james-oldfield commented 2 months ago

The MMoE layers do not support converting pre-trained MLPs' weights to MoEs, but that is a very interesting direction for future research!

jiangsongtao commented 2 months ago

Thanks!