google-research / vmoe


Here are some questions about Soft MoE #176

Open t5862755 opened 3 weeks ago

t5862755 commented 3 weeks ago
  1. According to the theory, an image is first transformed into tokens (patches), which are then turned into slots by the dispatch weights. I would like to know where in the code the original image is segmented so that it becomes tokens. For example, if we have a 32x32 image, set the sequence length to 16 (meaning we have 16 slots), and set the number of experts to 16 as well, the image seems to be transformed into slots directly; we don't see the image-to-token transformation anywhere in the code. In short, the tokens appear to depend on every pixel of the original image rather than on the patches segmented from it. (A dispatch/combine sketch follows these questions for reference.)

  2. I would like to know what loss function and optimizer you typically use with Soft MoE, because we want to train on a dataset of about 50,000 images with Soft MoE on two RTX 4090s.
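For reference on the "tokens become slots by weights" step in question 1, here is a minimal sketch of the Soft MoE dispatch/combine computation, paraphrased from the paper's equations. It is not the actual vmoe implementation; `soft_moe`, `phi`, and `expert_fns` are illustrative names.

```python
# Minimal Soft MoE dispatch/combine sketch (illustrative, not the vmoe code).
import jax
import jax.numpy as jnp

def soft_moe(x, phi, expert_fns):
    # x:   (m, d)  input tokens (m = number of tokens, d = hidden dim)
    # phi: (d, n)  learnable slot parameters (n = number of slots)
    logits = x @ phi                           # (m, n) token-to-slot affinities
    dispatch = jax.nn.softmax(logits, axis=0)  # normalize over tokens
    combine = jax.nn.softmax(logits, axis=1)   # normalize over slots
    slots = dispatch.T @ x                     # (n, d): each slot is a convex mix of ALL tokens
    # Each expert processes its slot(s); here one expert per slot, matching
    # the 16-slot / 16-expert example above.
    slot_outputs = jnp.stack([f(s) for f, s in zip(expert_fns, slots)])
    return combine @ slot_outputs              # (m, d): tokens recombine slot outputs
```

Note that the slots are weighted mixtures of tokens, not of raw pixels; the image-to-token step happens earlier, in the ViT backbone.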

jpuigcerver commented 2 weeks ago
  1. I'm not sure if I fully understand the question, but the 2D image is transformed into tokens in the main ViT architecture (https://github.com/google-research/vmoe/blob/c0220ef7f52e0697acfa5214d984de8fa36cad7f/vmoe/nn/vit_moe.py#L337 and line 348 in the same file). It has nothing to do with Soft MoEs.
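To make the token step concrete, here is a minimal sketch of how a ViT turns an image into tokens via patch embedding. This mirrors the standard ViT approach rather than the exact code at the linked lines; `PatchEmbed` and its fields are illustrative names.

```python
# Minimal ViT patch-embedding sketch (illustrative, not the exact vmoe code).
import flax.linen as nn
import jax.numpy as jnp

class PatchEmbed(nn.Module):
  hidden_size: int = 768
  patch_size: int = 8

  @nn.compact
  def __call__(self, images):
    # images: (batch, height, width, channels)
    # A convolution whose kernel and stride both equal the patch size cuts
    # the image into non-overlapping patches and linearly projects each one.
    x = nn.Conv(self.hidden_size,
                kernel_size=(self.patch_size, self.patch_size),
                strides=(self.patch_size, self.patch_size),
                padding='VALID')(images)
    # (batch, h/p, w/p, hidden) -> (batch, num_tokens, hidden)
    return jnp.reshape(x, (x.shape[0], -1, x.shape[-1]))
```

With a 32x32 image and an 8x8 patch size this produces 16 tokens, matching the example in the question; those tokens are what the Soft MoE layers later mix into slots.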

  2. We typically use cross-entropy and Adam. I've never trained an MoE with so few images; MoEs are especially useful when you are parameter-bounded, not data-bounded.
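For illustration, a minimal training-step sketch along those lines: cross-entropy loss with Adam via optax. The tiny model, the learning rate, and the data shapes are placeholder assumptions, not the recipe used in this repo.

```python
# Hedged sketch: cross-entropy + Adam training step in JAX/optax.
import jax
import jax.numpy as jnp
import flax.linen as nn
import optax

# Placeholder classifier standing in for the Soft MoE ViT (an assumption).
class TinyClassifier(nn.Module):
  num_classes: int = 10

  @nn.compact
  def __call__(self, images):
    x = jnp.reshape(images, (images.shape[0], -1))  # flatten pixels
    return nn.Dense(self.num_classes)(x)

model = TinyClassifier()
optimizer = optax.adam(learning_rate=1e-4)  # learning rate is a placeholder

def loss_fn(params, images, labels):
  logits = model.apply({'params': params}, images)
  # Cross-entropy over integer class labels.
  return optax.softmax_cross_entropy_with_integer_labels(logits, labels).mean()

@jax.jit
def train_step(params, opt_state, images, labels):
  loss, grads = jax.value_and_grad(loss_fn)(params, images, labels)
  updates, opt_state = optimizer.update(grads, opt_state, params)
  params = optax.apply_updates(params, updates)
  return params, opt_state, loss
```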