Open · jhonroxton opened 1 year ago
Thanks for your attention. Do you mean Figure 3 in our paper? For the t-SNE visualization, please refer to this link: https://github.com/yueliu1999/Awesome-Deep-Graph-Clustering/blob/main/dgc/visualization/visualization.py.
Thanks for the reply. I mean Figure 2 ("2D t-SNE visualization of seven methods on two datasets.") in README.md. This is my code:

```python
import matplotlib.pyplot as plt

# cluster centers (red crosses) drawn over the 2-D samples
plt.scatter(center.detach().numpy()[:, 0], center.detach().numpy()[:, 1],
            c='red', marker='x', label='Cluster Center')
plt.scatter(X[:, 0], X[:, 1], c=y, cmap='viridis', marker='o', label='Samples')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.title('Cluster Centers and Samples')
plt.legend()

# Save the figure to the './pic' directory
plt.savefig(f'pic/epoch_{epoch}.png')
# Close the figure to release memory
plt.close()
```
Thanks for sharing your code. Our visualization code is in the `t_sne` function in https://github.com/yueliu1999/Awesome-Deep-Graph-Clustering/blob/main/dgc/visualization/visualization.py.
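For readers without the repo handy, a minimal sketch of what such a t-SNE plot typically looks like, using scikit-learn and matplotlib. This is an approximation of the idea, not the repo's exact `t_sne` function, and the helper name `plot_tsne` is illustrative:

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(embeds, labels, save_path="tsne.png"):
    """Project high-dimensional embeddings to 2-D with t-SNE and
    color each point by its cluster label (approximation only;
    the repo's exact settings may differ)."""
    # reduce the embeddings to two dimensions
    tsne = TSNE(n_components=2, init="pca", random_state=0)
    embeds_2d = tsne.fit_transform(embeds)

    # scatter plot, one color per cluster label
    plt.figure(figsize=(6, 6))
    plt.scatter(embeds_2d[:, 0], embeds_2d[:, 1], c=labels, cmap="tab10", s=5)
    plt.axis("off")
    plt.savefig(save_path, dpi=300, bbox_inches="tight")
    plt.close()
```

Note that, unlike the snippet above, this projects the learned embeddings with t-SNE first rather than plotting two raw feature dimensions directly.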
I got it, thanks!
Why is there no train/test split?
In clustering, none of the samples are labeled. The clustering task is to assign similar samples to the same cluster and dissimilar samples to different clusters. Therefore there is no train/test split; the method operates on all of the data.
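To illustrate this point with a hedged sketch (a generic scikit-learn setup, not this repo's training code): the model clusters every sample, and the ground-truth labels are used only afterwards to compute evaluation metrics such as NMI and ARI.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score, adjusted_rand_score

# toy embeddings and ground-truth labels (placeholders for illustration)
embeds = np.random.rand(1000, 16)
y_true = np.random.randint(0, 7, size=1000)

# cluster ALL samples -- no train/test split in clustering
y_pred = KMeans(n_clusters=7, n_init=10, random_state=0).fit_predict(embeds)

# ground-truth labels enter only here, for evaluation
print("NMI:", normalized_mutual_info_score(y_true, y_pred))
print("ARI:", adjusted_rand_score(y_true, y_pred))
```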
The learnable trade-off α is initialized to 0.99999, and you said it reduces to around 0.4. But in my experiments on the Cora dataset with the default parameters, it drops to about -1.4 within 1000 epochs, and the same happens on other datasets and parameter settings. Is this reasonable?
Thanks for your attention. In our paper, we set the number of epochs to 400 and observed that the learnable trade-off alpha reduced to around 0.4, as shown in Figure 4 of the Appendix.

In your experiment, you set the number of epochs to 1000, and interestingly, the trade-off alpha dropped to -1.4. This could be attributed to overfitting, and it is worth exploring the cause and potential solutions. If you plan to train the networks for 1000 epochs, consider tuning the initial value of the trade-off alpha or adjusting the learning rate.
Certainly, we can suggest some strategies to control the trade-off parameter. One approach is gradual freezing: keep the parameter trainable for the first part of training, then make it untrainable after a certain number of epochs, as in the sketch below. This lets the model learn a good trade-off during the initial training phase and then fixes it, which improves stability and prevents overfitting. Experimenting with different freezing schedules and monitoring their impact on the model's performance would help find the most effective approach.
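A minimal PyTorch sketch of the freezing idea. The names `alpha` and `freeze_epoch`, the cutoff value, and the placeholder loss are all illustrative assumptions, not DCRN's actual training loop; a smoother variant could instead decay alpha's learning rate toward zero.

```python
import torch

# learnable trade-off, initialized near 1 as described in the paper
alpha = torch.nn.Parameter(torch.tensor(0.99999))
optimizer = torch.optim.Adam([alpha], lr=1e-3)

freeze_epoch = 400  # illustrative cutoff; tune for your setting

for epoch in range(1000):
    if epoch == freeze_epoch:
        # freezing step: alpha keeps its learned value
        # but stops receiving gradient updates
        alpha.requires_grad_(False)

    # placeholder loss standing in for the model's real objective
    loss = (alpha - 0.4) ** 2

    optimizer.zero_grad()
    if alpha.requires_grad:  # skip updates once frozen
        loss.backward()
        optimizer.step()
```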
How to plot Fig. 2?