I also ran into this problem in my experiments. The training itself does not add much overhead, since it runs entirely on the GPU. However, the K-means step from the scikit-learn package runs on the CPU, which is where the extra time cost comes from. Moving K-means to the GPU should speed up DCRN. Some GPU implementations of K-means can be found at the links below, followed by a minimal PyTorch sketch.
https://github.com/yueliu1999/Awesome-Deep-Graph-Clustering/blob/main/dgc/clustering/kmeans_gpu.py
https://github.com/subhadarship/kmeans_pytorch
https://github.com/NVIDIA/kmeans
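
For reference, here is a minimal sketch of K-means running on the GPU with plain PyTorch (assuming the embeddings are already a float tensor; this is not the exact API of the repositories above, and it uses random initialization rather than scikit-learn's k-means++ default, so results may differ slightly):

```python
import torch

def kmeans_gpu(X, n_clusters, n_iters=100, tol=1e-4, device="cuda"):
    """Minimal K-means on GPU. X: (N, D) float tensor. Returns (labels, centers)."""
    X = X.to(device)
    # Initialize centers by sampling random points (not k-means++).
    idx = torch.randperm(X.size(0), device=device)[:n_clusters]
    centers = X[idx].clone()

    for _ in range(n_iters):
        # Assignment step: nearest center by Euclidean distance, shape (N, K).
        dists = torch.cdist(X, centers, p=2)
        labels = dists.argmin(dim=1)

        # Update step: recompute centers as the mean of assigned points.
        new_centers = torch.zeros_like(centers)
        counts = torch.zeros(n_clusters, device=device, dtype=X.dtype)
        new_centers.index_add_(0, labels, X)
        counts.index_add_(0, labels, torch.ones_like(labels, dtype=X.dtype))
        mask = counts > 0
        new_centers[mask] /= counts[mask].unsqueeze(1)
        # Keep the old center if a cluster lost all its points.
        new_centers[~mask] = centers[~mask]

        shift = (new_centers - centers).norm(dim=1).max()
        centers = new_centers
        if shift < tol:
            break

    return labels.cpu(), centers.cpu()

# Hypothetical usage: cluster node embeddings produced by the model.
# labels, centers = kmeans_gpu(embeddings, n_clusters=10)
```

Since the embeddings are already on the GPU after the forward pass, this avoids the device-to-host transfer and the CPU-bound scikit-learn call in every clustering round.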