Closed freepoet closed 3 years ago
Hi @freepoet, sorry for not getting back to you! Yes, it should support this naturally, as the classes all inherit from torch.nn.Module.
Did you manage to get it working?
I have the same problem. When I use DWTForward as a down-sampling layer inside my model, the resulting output remains on the CPU, even though I have used PyTorch's DataParallel to move the entire model onto multiple GPUs. Any suggestions?
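A common cause of this symptom is a layer that stores its filters as plain tensors rather than registered buffers: `.cuda()` and `nn.DataParallel` only move parameters and buffers, so unregistered tensors stay on the CPU. Below is a minimal, hypothetical sketch (not pytorch_wavelets' actual implementation) of a Haar down-sampling module that registers its filter as a buffer so it follows the module across devices:

```python
import torch
import torch.nn as nn

class HaarDownsample(nn.Module):
    """Hypothetical wavelet-style down-sampling block for illustration.

    The low-pass filter is registered as a buffer, so .to()/.cuda()/
    nn.DataParallel will replicate it onto each GPU along with the
    module. A plain tensor attribute would be left behind on the CPU.
    """

    def __init__(self):
        super().__init__()
        # Haar low-pass filter, shape (out=1, in=1, kH=1, kW=2).
        lo = torch.tensor([[0.5, 0.5]])
        self.register_buffer("lo", lo.view(1, 1, 1, 2))

    def forward(self, x):
        # Depthwise low-pass filter with a stride-2 downsample along width.
        c = x.shape[1]
        w = self.lo.expand(c, 1, 1, 2)
        return nn.functional.conv2d(x, w, stride=(1, 2), groups=c)

model = HaarDownsample()
# For multi-GPU use, buffers move together with the module:
# model = nn.DataParallel(model).cuda()
x = torch.randn(2, 3, 4, 8)
y = model(x)
print(y.shape)  # torch.Size([2, 3, 4, 4])
```

If a third-party layer keeps its filters as plain attributes, a workaround is to move those tensors manually with `.to(device)` before wrapping the model in DataParallel.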
Hi, fbcotter. I'm very interested in your dissertation and this GitHub repo. I ran into a problem when I called DTCWTForward(). Does DTCWTForward() support training on multiple GPUs? I want to `import pytorch_wavelets` and then train a CNN on 8 GPUs. Thanks!