⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024)
Data clustering: add tokenization for clustered data, fix training & eval bugs in `moe_gates.py` #24
Closed by Spico197 1 year ago
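The tokenization step named in the title could look roughly like the sketch below. This is a minimal illustration, assuming each cluster is stored as a JSONL file with a `content` field and that a Hugging Face `AutoTokenizer` is used; the paths, field name, tokenizer checkpoint, and output format are assumptions for illustration and are not taken from the repository's actual pipeline or from `moe_gates.py`.

```python
"""Minimal sketch: tokenize clustered data, one token stream per cluster.

Assumptions (not from the repo): clusters live in data/clustered/*.jsonl,
each line is {"content": "..."}; outputs are flat uint32 .npy arrays.
"""
import json
from pathlib import Path

import numpy as np
from transformers import AutoTokenizer

CLUSTER_DIR = Path("data/clustered")            # hypothetical input location
OUTPUT_DIR = Path("data/clustered_tokenized")   # hypothetical output location
OUTPUT_DIR.mkdir(parents=True, exist_ok=True)

# LLaMA-MoE is built from LLaMA-2-7B, so its tokenizer is a reasonable guess.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

for cluster_file in sorted(CLUSTER_DIR.glob("*.jsonl")):
    token_ids = []
    with cluster_file.open() as f:
        for line in f:
            record = json.loads(line)
            ids = tokenizer(record["content"], add_special_tokens=False)["input_ids"]
            # Separate documents with EOS so they can be packed later.
            token_ids.extend(ids + [tokenizer.eos_token_id])
    out_path = OUTPUT_DIR / f"{cluster_file.stem}.npy"
    np.save(out_path, np.asarray(token_ids, dtype=np.uint32))
    print(f"{cluster_file.name}: {len(token_ids)} tokens -> {out_path}")
```

Keeping one flat token array per cluster makes it easy to pack fixed-length training sequences per cluster downstream, but the repository may store tokenized data in a different format.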