GPU Utilization

In the previous version of the code, Torch used only torch.FloatTensor, so all computations ran on the CPU and training times were longer. This version adds a CUDA implementation that utilizes the GPU via torch.cuda.FloatTensor when one is available.

Related Issues: #6

[ + ] main.py: Added a device variable that is passed as an argument to the train(...) and test(...) functions.
[ + ] model.py: The spatial and temporal embedding inputs (whose superclass is nn.Module) now utilize the GPU if one exists.
[ + ] train.py: The function signature now accepts the device variable, and input data such as X, TE, and labels are moved with .to(device).
[ + ] test.py: Similar changes as in train.py.

Time Slot Change

[ + ] utils_.py: The line time.freq.delta.total_seconds() was causing problems, so it has been replaced with args.time_slot * 60, which is a more robust and general approach.
[ ] utils_.py: Needs more attention; the time and frequency calculation approach should be reviewed.
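The device plumbing described above can be sketched roughly as follows. This is a minimal illustration, not the repo's actual code: the names SimpleModel and train_step are hypothetical stand-ins for the real model and train(...) function.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def get_device():
    # Prefer the GPU when CUDA is available, otherwise fall back to CPU.
    return torch.device("cuda" if torch.cuda.is_available() else "cpu")

class SimpleModel(nn.Module):  # hypothetical stand-in for the real model
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        return self.fc(x)

def train_step(model, X, labels, device):
    # Move the inputs to the same device as the model before the forward pass,
    # mirroring the .to(device) calls added to train.py.
    X = X.to(device)
    labels = labels.to(device)
    preds = model(X)
    return F.mse_loss(preds, labels)

device = get_device()
# .to(device) on the module moves all parameters (e.g. embeddings) at once,
# which is what the model.py change accomplishes.
model = SimpleModel(4, 1).to(device)
X = torch.randn(8, 4)
labels = torch.randn(8, 1)
loss = train_step(model, X, labels, device)
```

Passing the device object explicitly, rather than hard-coding .cuda(), keeps the same code path working on CPU-only machines.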
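The time-slot change above amounts to deriving the slot length directly from the configured slot size in minutes instead of from a pandas frequency object. A small sketch, assuming args.time_slot holds the slot length in minutes (the value 5 below is illustrative):

```python
class Args:
    # Hypothetical stand-in for the parsed command-line arguments;
    # time_slot is assumed to be the slot length in minutes.
    time_slot = 5

args = Args()

# New approach: slot length in seconds comes straight from the config,
# replacing the problematic time.freq.delta.total_seconds() call.
slot_seconds = args.time_slot * 60

# A common downstream use of this value: number of slots per day.
slots_per_day = 24 * 60 * 60 // slot_seconds
```

Because the value no longer depends on the inferred frequency of the timestamp index, it behaves the same regardless of how the input data's time index was constructed.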