Marigoldwu / A-Unified-Framework-for-Deep-Attribute-Graph-Clustering

This project is a scalable unified framework for deep graph clustering.
https://www.marigold.website/readArticle?workId=145&author=Marigold&authorId=1000001
MIT License

A bug #9

Open 11051911 opened 2 months ago

11051911 commented 2 months ago

When I ran the experiments with my own data, most methods ran fine, but the methods involving cross-entropy loss reported the error below. May I ask the author what the problem might be? Have you encountered it before?

C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\cuda\Loss.cu:106: block: [32,0,0], thread: [126,0,0] Assertion `target_val >= zero && target_val <= one` failed.
C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\cuda\Loss.cu:106: block: [4255,0,0], thread: [62,0,0] Assertion `target_val >= zero && target_val <= one` failed.
File "E:\最新聚类论文代码\A-Unified-Framework-for-Deep-Attribute-Graph-Clustering-main\main.py", line 53, in <module>
    result = train(args, data, logger)
File "E:\最新聚类论文代码\A-Unified-Framework-for-Deep-Attribute-Graph-Clustering-main\model\pretrain_gat_for_efrdgc\train.py", line 48, in train
    loss = F.binary_cross_entropy(A_pred.view(-1), adj_label.view(-1))
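For context, the CUDA assertion `target_val >= zero && target_val <= one` fires when the target tensor passed to `F.binary_cross_entropy` (here `adj_label`) contains values outside [0, 1], e.g. a weighted or unnormalized adjacency matrix, or NaN/INF entries. Below is a minimal sketch of a pre-check one could add before the failing call; the helper name `checked_bce` and the clamp on the prediction are illustrative assumptions, not code from the repository.

```python
import torch
import torch.nn.functional as F

def checked_bce(A_pred: torch.Tensor, adj_label: torch.Tensor) -> torch.Tensor:
    """Sanity-check inputs before BCE instead of letting the CUDA kernel assert."""
    pred = A_pred.view(-1)
    target = adj_label.view(-1).float()

    # The target must be a valid probability: no NaN/INF, all values in [0, 1].
    if torch.isnan(target).any() or torch.isinf(target).any():
        raise ValueError("adj_label contains NaN/INF values")
    if target.min() < 0 or target.max() > 1:
        raise ValueError(
            f"adj_label must lie in [0, 1]; got min={target.min().item():.4f}, "
            f"max={target.max().item():.4f} (is the adjacency weighted or unnormalized?)"
        )

    # The prediction must also be in [0, 1] (e.g. the output of a sigmoid).
    pred = pred.clamp(0.0, 1.0)
    return F.binary_cross_entropy(pred, target)
```

Running this check on CPU first also gives a readable Python exception rather than an asynchronous CUDA assertion, which makes the offending tensor much easier to identify.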

Marigoldwu commented 2 months ago


@11051911 Hello! Thank you for using the code in this repository and for raising a valuable issue! I have indeed run into similar problems before, and they often occur with new models or new datasets. A likely cause is that during the forward pass some tensor values become NaN or INF. You can debug to locate where the computation goes wrong and handle those values specially, for example replacing NaN with 0 or INF with 1 (depending on the situation). If the above does not solve the problem, feel free to continue the discussion. My WeChat: 18751766925.
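A minimal sketch of the kind of NaN/INF handling described above, using `torch.isnan` / `torch.isinf` for detection and `torch.nan_to_num` for replacement; the helper name `sanitize` and the specific replacement values (0 for NaN, ±1 for ±INF) are illustrative assumptions, not the repository's code.

```python
import torch

def sanitize(t: torch.Tensor, name: str = "tensor") -> torch.Tensor:
    """Replace NaN with 0 and +/-INF with +/-1, logging when it happens."""
    bad = torch.isnan(t) | torch.isinf(t)
    if bad.any():
        print(f"[debug] {name}: {bad.sum().item()} NaN/INF values replaced")
        # nan -> 0, +inf -> 1, -inf -> -1; adjust to whatever makes sense for your model.
        t = torch.nan_to_num(t, nan=0.0, posinf=1.0, neginf=-1.0)
    return t

# Example usage inside a forward pass, e.g. on the reconstructed adjacency
# before it is fed to binary cross-entropy:
# A_pred = sanitize(torch.sigmoid(z @ z.t()), name="A_pred")
```

Inserting such a check after each suspicious operation (attention scores, normalization, reconstruction) usually narrows down which step first produces the invalid values.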