I notice that `embedding_user` and `embedding_item` are initialized with `torch.nn.init.normal_`, and that there is an option to use pretrained weights instead.
In my dataset, the recommendation results are strongly correlated with the item names, and I want to use BERT to produce word embeddings so that items with similar names get similar embedding vectors.
So can I use these word embeddings as the pretrained user & item weights to get better performance?
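For reference, here is a minimal sketch of what I have in mind, assuming `item_names` holds the item-name strings and `latent_dim` is the model's embedding size (both names are placeholders, and the projection step is just one possible way to match dimensions):

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Hypothetical inputs: the list of item names and the recommender's latent dim.
item_names = ["red running shoes", "blue running shoes", "coffee mug"]
latent_dim = 64

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased").eval()

with torch.no_grad():
    batch = tokenizer(item_names, padding=True, truncation=True, return_tensors="pt")
    # Mean-pool the last hidden states over non-padding tokens (768-dim for BERT-base).
    hidden = bert(**batch).last_hidden_state          # (num_items, seq_len, 768)
    mask = batch["attention_mask"].unsqueeze(-1)      # (num_items, seq_len, 1)
    item_vecs = (hidden * mask).sum(1) / mask.sum(1)  # (num_items, 768)

# Project the 768-dim BERT vectors down to the model's latent dimension.
# A fixed random projection roughly preserves relative similarity; PCA or a
# learned projection would be alternatives.
proj = torch.nn.Linear(768, latent_dim, bias=False)
with torch.no_grad():
    pretrained_item_weight = proj(item_vecs)          # (num_items, latent_dim)

# Copy into the item embedding table in place of the normal_ initialization.
embedding_item = torch.nn.Embedding(len(item_names), latent_dim)
with torch.no_grad():
    embedding_item.weight.copy_(pretrained_item_weight)
```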