-
I'm using Google Colab to train the model. I used the Gowalla dataset, and a few seconds after calling
model = LightGCN(hparams, data, seed=SEED)
system RAM fills up and my session terminate…
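A plausible culprit at Gowalla scale is materializing the normalized adjacency densely instead of keeping it sparse; here is a back-of-envelope sketch (the user/item/interaction counts below are the commonly used Gowalla benchmark split, assumed here for illustration, not taken from this report):

```python
# Rough memory arithmetic, assuming the standard Gowalla split
# (~29.9k users, ~41.0k items, ~1.03M interactions).
n_users, n_items, n_inter = 29_858, 40_981, 1_027_370
n_nodes = n_users + n_items

# The normalized adjacency as a dense float32 matrix:
dense_gb = n_nodes ** 2 * 4 / 1e9
print(f"dense adj:  {dense_gb:.1f} GB")   # ~20 GB -- more than Colab's ~12 GB of RAM

# The same matrix in CSR form (4-byte value + 4-byte column index per
# non-zero, each interaction stored twice for symmetry, plus row pointers):
sparse_mb = (2 * n_inter * 8 + (n_nodes + 1) * 4) / 1e6
print(f"sparse adj: {sparse_mb:.0f} MB")  # ~17 MB
```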
-
Hi,
I have tried LightGCN in an industrial setting, but when the interaction data is large, e.g. 3 million interactions with about 300,000 users and 100,000 items, a single GPU blows up, that is, it runs out of memor…
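For scale: the embedding tables themselves are small at these sizes ((300k + 100k) × 64 × 4 bytes ≈ 100 MB); what usually exhausts a single GPU is full-graph propagation or scoring every user against every item at once. Below is a minimal sketch of mini-batched BPR training whose peak memory scales with the batch rather than the graph (a generic illustration, not this repo's API; the 64-dim setting is an assumption):

```python
import torch
import torch.nn.functional as F

# Scale from the issue: ~300k users, ~100k items, 3M interactions.
n_users, n_items, dim = 300_000, 100_000, 64

user_emb = torch.nn.Embedding(n_users, dim)
item_emb = torch.nn.Embedding(n_items, dim)
opt = torch.optim.Adam(
    list(user_emb.parameters()) + list(item_emb.parameters()), lr=1e-3
)

def bpr_step(users, pos_items, neg_items):
    """One BPR update over a batch of (user, positive, negative) triples;
    memory use is proportional to the batch, not the interaction set."""
    u, p, n = user_emb(users), item_emb(pos_items), item_emb(neg_items)
    loss = -F.logsigmoid((u * p).sum(-1) - (u * n).sum(-1)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```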
-
Hi, I ran the recommended code
`python main.py --dataset ali --gnn ngcf --dim 64 --lr 0.0001 --batch_size 1024 --gpu_id 0 --context_hops 3 --agg concat --ns mixgcf --K 1 --n_negs 64 `
And receive…
-
Thanks for your hard work and contribution! May I ask how you created the Gowalla dataset? I saw that there are about 6 million records in total in Gowalla_totalCheckins.txt, rather than the 1 million mark…
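The preprocessing isn't stated in this report, but the common recipe for deriving the ~1M-interaction Gowalla benchmark from the ~6M raw check-ins is to deduplicate repeat (user, location) visits and then iteratively apply a 10-core filter; a sketch of that assumed pipeline:

```python
import pandas as pd

# Raw file: one row per check-in (user, time, lat, lon, location).
cols = ["user", "time", "lat", "lon", "item"]
df = pd.read_csv("Gowalla_totalCheckins.txt", sep="\t", names=cols)

# Repeat visits to the same place collapse into one interaction.
df = df.drop_duplicates(subset=["user", "item"])

# Iterative 10-core filtering: keep only users/items with >= 10 interactions,
# repeating until the condition holds for everything that remains.
while True:
    user_ok = df.groupby("user")["item"].transform("count") >= 10
    item_ok = df.groupby("item")["user"].transform("count") >= 10
    keep = user_ok & item_ok
    if keep.all():
        break
    df = df[keep]

print(len(df))  # lands near the ~1M interactions of the released split
```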
-
Thanks for providing such a great codebase. I have a few questions:
1. The example run code provided:
`dataset_name = "ml1m"
config = {
"victim_data": dataset.from_config("implicit", dataset_name, need_graph=True),
"attack_data": dataset.from_config("explici…
-
### Description
@wutaomsft suggestion:
> it would be a good discussion point what the preferred way to make references in notebooks is. I prefer not to have a "reference" section where references are…
-
Hello, may I ask which year's version of the Amazon dataset was used?
-
## Issue description
Hello.
During training, after replacing a parameter, gradients are no longer being updated correctly.
Specifically:
There is a fixed tensor adj_mat that does not require gradients.
A m…
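A generic reproduction of this pitfall (an illustration, not this repo's code): rebinding a module attribute to a fresh Parameter creates an object the optimizer has never registered, so its gradients are computed but never applied; mutating the registered parameter in place keeps the optimizer's reference valid.

```python
import torch

lin = torch.nn.Linear(4, 4, bias=False)
opt = torch.optim.SGD(lin.parameters(), lr=0.1)

# BROKEN: the optimizer still tracks the old weight object, whose grad
# stays None, so step() silently updates nothing.
lin.weight = torch.nn.Parameter(torch.randn(4, 4))
before = lin.weight.detach().clone()
lin(torch.randn(2, 4)).sum().backward()
opt.step()
print(torch.equal(lin.weight, before))  # True: the replacement never moved

# FIX: copy new values into the registered parameter (or rebuild the
# optimizer after replacing it) so the tracked object stays the same.
lin2 = torch.nn.Linear(4, 4, bias=False)
opt2 = torch.optim.SGD(lin2.parameters(), lr=0.1)
with torch.no_grad():
    lin2.weight.copy_(torch.randn(4, 4))  # same object, new values
before = lin2.weight.detach().clone()
lin2(torch.randn(2, 4)).sum().backward()
opt2.step()
print(torch.equal(lin2.weight, before))  # False: updates now apply
```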
-
LightFM implements the WARP and k-OS WARP losses in addition to BPR, and finds that WARP outperforms BPR: https://making.lyst.com/lightfm/docs/examples/warp_loss.html (performance comparison, citations to re…
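For context, the core of WARP fits in a few lines: keep sampling negatives until one violates the margin, then weight the hinge loss by an estimate of the positive item's rank. The sketch below is a paraphrase of the algorithm, not LightFM's actual (Cython) implementation; the names and the log-based rank weight are my assumptions.

```python
import numpy as np

def warp_loss(pos_score, score_fn, n_items, margin=1.0, max_trials=100, rng=None):
    """One WARP sample: draw negatives until one violates the margin, then
    weight the hinge loss by an estimate of the positive item's rank."""
    rng = rng or np.random.default_rng()
    for trials in range(1, max_trials + 1):
        neg = int(rng.integers(n_items))
        neg_score = score_fn(neg)
        if neg_score > pos_score - margin:        # violating negative found
            rank_est = (n_items - 1) // trials    # few trials => positive ranks low
            weight = np.log(rank_est + 1.0)       # ~ the paper's harmonic-sum weight
            return weight * (margin - pos_score + neg_score)
    return 0.0  # no violation within budget: no update for this positive
```

The rank-aware weight is what separates WARP from BPR: a violation found only after many trials implies the positive already ranks near the top, so its loss is down-weighted, focusing updates on badly ranked positives.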
-
Hello author, I'm a bit puzzled by some observations in the paper. In Table 4 and Table 6, when multi-head self-attention and the residual connections are removed, training time drops noticeably in both cases, yet in theory neither multi-head self-attention nor residual connections should increase the computational complexity or introduce extra computation. Could you offer a reasonable explanation for this phenomenon?