xiangwang1223 / knowledge_graph_attention_network

KGAT: Knowledge Graph Attention Network for Recommendation, KDD2019

Some questions about batch_size and some parameters #8

Open · Bosen-Zhang opened this issue 5 years ago

Bosen-Zhang commented 5 years ago

Hello, thank you very much for your contribution. I tried to run your example, but due to GPU memory limits I can only use a batch_size of at most 512. The problem I found is that the result is no better than NFM, and the loss drops very slowly. The command I used is:

python Main.py --model_type kgat --alg_type bi --dataset last-fm --regs [1e-5,1e-5] --layer_size [64,32,16] --embed_size 64 --lr 0.001 --epoch 400 --verbose 1 --save_flag 1 --pretrain -1 --batch_size 512 --node_dropout [0.1] --mess_dropout [0.1,0.1,0.1] --use_att True --use_kge True

Are my parameters wrong, or is the result affected by the batch size? In addition, I could not find where loss_type, n_memory, and using_all_hops are used in the source code; how can I use them?

xiangwang1223 commented 4 years ago

Thanks for your interest. My suggestion is to use matrix factorization (MF) embeddings, or a KGAT model with only one layer, to initialize the user and item embeddings of the three-layer KGAT.

Actually, KGAT-1 uses much less memory than KGAT-3. A sketch of a one-layer run is shown below.
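For reference, a one-layer run might look like the following. This is only a sketch based on the flags in the command above; the single-entry --layer_size and --mess_dropout values are my assumption for a one-layer configuration, not settings confirmed by the author.

```bash
# Assumed KGAT-1 configuration: same flags as the example above,
# but with a single propagation layer so it needs less GPU memory.
python Main.py --model_type kgat --alg_type bi --dataset last-fm \
  --regs [1e-5,1e-5] --layer_size [64] --embed_size 64 \
  --lr 0.001 --epoch 400 --verbose 1 --save_flag 1 \
  --pretrain -1 --batch_size 512 \
  --node_dropout [0.1] --mess_dropout [0.1] \
  --use_att True --use_kge True
```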

xiangwang1223 commented 4 years ago

Hi @Bosen-Zhang, I have uploaded the MF embeddings. Now you can rerun the model and check whether your results are consistent with ours. Thanks.
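For anyone else reading this thread, rerunning after downloading the released MF embeddings would presumably reuse the same command as in the original question. My understanding (an assumption, not confirmed in this thread) is that --pretrain -1 tells Main.py to load the pretrained user/item embeddings for the chosen --dataset as initialization:

```bash
# Sketch: rerunning KGAT on last-fm with the released MF embeddings.
# Assumption (not confirmed here): --pretrain -1 loads the pretrained
# user/item embeddings for the given --dataset before training starts.
python Main.py --model_type kgat --alg_type bi --dataset last-fm \
  --regs [1e-5,1e-5] --layer_size [64,32,16] --embed_size 64 \
  --lr 0.001 --epoch 400 --verbose 1 --save_flag 1 \
  --pretrain -1 --batch_size 512 \
  --node_dropout [0.1] --mess_dropout [0.1,0.1,0.1] \
  --use_att True --use_kge True
```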