Hello, I am deeply interested in recommendation systems, especially DLRM.
As far as I know, embedding vector sizes vary from model to model, typically ranging from 64 B to 512 B per vector depending on the model configuration.
Is there any particular reason for this wide range of dimensionality? My guess is that it provides flexibility in configuring the model: embedding tables are growing in entry count these days for better model quality, so pushing the vector size beyond 512 B would add extra memory and bandwidth overhead on top of that. Is this the main reason for the variety of vector dimensions?
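To make the overhead I have in mind concrete, here is a minimal back-of-the-envelope sketch. The table cardinality and fp32 precision are my own assumptions, not numbers from any specific DLRM deployment; it just shows how the per-vector byte size scales a single table's memory footprint.

```python
# Assumed illustration: memory footprint of one embedding table as the
# per-vector size grows from 64 B to 512 B. All numbers are hypothetical.

BYTES_PER_ELEMENT = 4  # fp32

def table_size_bytes(num_entries: int, vector_bytes: int) -> int:
    """Total size of one table whose rows are vector_bytes wide."""
    return num_entries * vector_bytes

num_entries = 10_000_000  # assumed table cardinality
for vector_bytes in (64, 128, 256, 512):
    dim = vector_bytes // BYTES_PER_ELEMENT  # elements per vector
    size_gib = table_size_bytes(num_entries, vector_bytes) / 2**30
    print(f"{vector_bytes:4d} B/vector ({dim:3d} fp32 elems): "
          f"{size_gib:.2f} GiB per 10M-entry table")
```

So at a fixed entry count, moving from 64 B to 512 B vectors multiplies the table size (and the bytes fetched per lookup) by 8x, which is why I suspect the dimension is kept configurable per model.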