weiwang1992 / KFGAN


The data results show #1

Open wangbh917 opened 2 months ago

wangbh917 commented 2 months ago

"The data results show a decreasing trend from high to low and then stabilize."

weiwang1992 commented 2 months ago

"The data results show a decreasing trend from high to low and then stabilize."

On the book dataset, we obtain the best AUC and F1 at epoch 7. On the music dataset, we obtain the best result at epoch 2, since the dataset is smaller and training converges more quickly. On the movie dataset, we achieve the best results at epoch 19, owing to its much larger data volume. I'm not sure why you say the results show a decreasing trend from high to low and then stabilize; in my experiments, at least, model performance goes up, then stabilizes, and then decreases.
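To make the epoch selection concrete, here is a minimal sketch (not the actual KFGAN training loop; train_one_epoch and evaluate are hypothetical helpers) of tracking the best validation AUC/F1 per epoch and stopping once they have not improved for a few epochs:

    best_auc, best_f1, best_epoch = 0.0, 0.0, -1
    patience, bad_epochs = 5, 0
    for epoch in range(args.n_epoch):
        train_one_epoch(model, train_loader)       # hypothetical training step
        auc, f1 = evaluate(model, eval_loader)     # hypothetical evaluation step
        if auc > best_auc:
            best_auc, best_f1, best_epoch = auc, f1, epoch
            bad_epochs = 0
        else:
            bad_epochs += 1
        if bad_epochs >= patience:                 # curve has peaked and started to drop
            break
    print(f'best auc {best_auc:.4f}, f1 {best_f1:.4f} at epoch {best_epoch}')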

wangbh917 commented 2 months ago

Hello, after applying the parameters from your paper, the results initially improved, and the F1 score on the music dataset rose to 0.77. However, the results later started to decline, though not monotonically: the F1 score dropped to around 0.75 and then leveled off. If possible, could you share your exact parameter configuration?

weiwang1992 commented 2 months ago

main.py

parser = argparse.ArgumentParser()
parser.add_argument('-d', '--dataset', type=str, default='music', help='which dataset to use (music, book, movie, food)')
parser.add_argument('--n_epoch', type=int, default=50, help='the number of epochs')
parser.add_argument('--batch_size', type=int, default=2048, help='batch size')
parser.add_argument('--n_layer', type=int, default=3, help='depth of layer')
parser.add_argument('--lr', type=float, default=0.004, help='learning rate')
parser.add_argument('--l2_weight', type=float, default=1e-4, help='weight of the l2 regularization term')  # movie-5
parser.add_argument('--dim', type=int, default=64, help='dimension of entity and relation embeddings')

parser.add_argument('--user_triple_set_size', type=int, default=256, help='the number of triples in triple set of user')
parser.add_argument('--user_potential_triple_set_sampling_size', type=int, default=256, help='the number of triples in triple set of user potential set')
parser.add_argument('--item_origin_triple_set_size', type=int, default=256, help='the number of triples in triple set of item origin')
parser.add_argument('--item_potential_triple_set_sampling_size', type=int, default=256, help='the number of triples in triple set of item')

parser.add_argument('--agg', type=str, default='concat', help='the type of aggregation function (sum, pool, concat)')

parser.add_argument('--use_cuda', type=bool, default=True, help='whether using gpu or cpu')
parser.add_argument('--show_topk', type=bool, default=False, help='whether showing topk or not')
parser.add_argument('--random_flag', type=bool, default=False, help='whether using random seed or not')
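With these defaults, a music run would be launched as follows (assuming main.py is invoked directly; all flags are from the parser above, and each one shown simply restates its default):

    python main.py -d music --n_epoch 50 --batch_size 2048 --lr 0.004 --dim 64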

model.py
self.ssl_temp = 0.2  # for softmax
self.kge_weight = 1e-6  # for kge_loss
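For context, these two values play the usual roles of a contrastive-loss temperature and a loss weight. A sketch only, assuming an InfoNCE-style self-supervised term (this is not the actual KFGAN loss; user_emb_1 and user_emb_2 are hypothetical views of the same users):

    import torch
    import torch.nn.functional as F

    def ssl_loss(user_emb_1, user_emb_2, ssl_temp=0.2):
        # normalize both views, then compute temperature-scaled similarities
        z1 = F.normalize(user_emb_1, dim=1)
        z2 = F.normalize(user_emb_2, dim=1)
        logits = z1 @ z2.t() / ssl_temp
        # row i's positive is column i (the other view of the same user)
        labels = torch.arange(z1.size(0), device=z1.device)
        return F.cross_entropy(logits, labels)

    # kge_weight would then scale the KGE term in the total objective, e.g.:
    # loss = base_loss + kge_weight * kge_loss + ssl_loss(view1, view2)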

wangbh917 commented 2 months ago

Thank you for your reply. I have reproduced the results of your experiment, but I would still like to ask why the results decline in later epochs. Thank you very much.