VincLee8188 / GMAN-PyTorch

Implementation of Graph Multi-Attention Network with PyTorch

Graph Multi Attention Network PyTorch Implementation with GPU Utilization #8

Open muratbayrktr opened 2 years ago

muratbayrktr commented 2 years ago

GPU Utilization

In the previous version of the code, tensors were created with the default CPU type (`torch.FloatTensor`). This caused all computation to run on the CPU, so training times were longer. This newer version adds a CUDA implementation that utilizes the GPU via `torch.cuda.FloatTensor`.
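A minimal sketch of the idea (not the repo's exact code): instead of relying on the CPU default tensor type, select a device once and move tensors and the model onto it, so the same script uses the GPU when one is available and falls back to the CPU otherwise. The tensor shapes here are placeholders, not GMAN's actual inputs.

```python
import torch

# Pick the GPU when available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholder tensors standing in for model inputs/weights: calling
# .to(device) keeps every downstream computation on the chosen device.
x = torch.randn(4, 3).float().to(device)
w = torch.randn(3, 2).float().to(device)

y = x @ w  # runs on the GPU if `device` is "cuda", else on the CPU
print(y.shape, y.device.type)
```

The same `.to(device)` call works for an `nn.Module`, which moves all of its parameters and buffers at once.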

Related Issues: #6

Time Slot Change

  • [ ] `utils.py`: Needs more attention; the time & frequency calculation approach should be reviewed.
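For context on what the time & frequency calculation typically looks like, here is a minimal sketch of one common approach (an assumption, not the repo's exact code): map a timestamp to a time-of-day slot index by dividing the seconds since midnight by the slot length. The function name and the 5-minute default frequency are hypothetical.

```python
from datetime import datetime

def time_slot_index(ts: datetime, freq_minutes: int = 5) -> int:
    """Map a timestamp to its time-of-day slot index.

    Assumed approach: seconds since midnight, integer-divided by the
    slot length in seconds. With freq_minutes=5 a day has 288 slots.
    """
    seconds = ts.hour * 3600 + ts.minute * 60 + ts.second
    return seconds // (freq_minutes * 60)

print(time_slot_index(datetime(2024, 1, 1, 0, 10)))   # → 2
print(time_slot_index(datetime(2024, 1, 1, 23, 55)))  # → 287
```

If the review changes the frequency, the number of slots per day (and any embedding sized to it) has to change consistently, which is the kind of coupling worth checking in `utils.py`.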