I tried reducing the batch_size to 2, but I can still only run a few epochs, and I also tried adding th.cuda.empty_cache() in your code.
I ran this: python ./model/Weibo/BiGCN_Weibo.py 100
and my environment is:
GPU: RTX 3080
PyTorch 1.4.0
CUDA 10.1
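For clarity, this is roughly how the two workarounds were combined: batch_size reduced to 2 and th.cuda.empty_cache() called after each batch. This is a minimal sketch with an illustrative stand-in model, not the repo's actual BiGCN training loop in ./model/Weibo/BiGCN_Weibo.py:

```python
import torch as th
import torch.nn as nn

# Illustrative stand-in for the BiGCN model (the real one is a graph network).
model = nn.Linear(8, 2)
optimizer = th.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def run_epoch(batches):
    """One training epoch; returns the last batch's loss value."""
    loss = None
    for x, y in batches:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        # Release unused cached GPU memory after each batch --
        # this is where the empty_cache() call was added.
        if th.cuda.is_available():
            th.cuda.empty_cache()
    return loss.item()

# batch_size reduced to 2, as described above.
batches = [(th.randn(2, 8), th.randint(0, 2, (2,))) for _ in range(3)]
final_loss = run_epoch(batches)
```

Note that empty_cache() only releases memory held by PyTorch's caching allocator back to the driver; it does not free tensors that are still referenced, so it rarely fixes a true out-of-memory caused by the model or graph itself.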