Hi,
When I ran benchmark/main.py with the --gpu parameter, I noticed almost no GPU utilization, while the CPU load was as high as with --gpu -1.
I then tried moving the data to the GPU with `data.to(device=validate_device(gpu_id=args.gpu))`, but device-mismatch errors occurred. I found that, because of the nested model structure and its temporary definition, most of the parameters remain on the CPU even though the outer model's device is the GPU.
For example, in ANOMALOUS's fit method (pygod/detector/anomalous.py), `self.model = ANOMALOUSBase(w_init, r_init)` leaves self.model on the CPU, which means the loss and its backward pass are computed entirely on the CPU.
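A minimal sketch of the fix I'd expect, assuming the inner module just needs a `.to(device)` right after construction (InnerModel below is a hypothetical stand-in for an internally built base model like ANOMALOUSBase, not pygod's actual class):

```python
import torch


class InnerModel(torch.nn.Module):
    """Hypothetical stand-in for an internally constructed base model."""

    def __init__(self, w_init, r_init):
        super().__init__()
        # Parameters inherit the device of the init tensors (CPU by default).
        self.w = torch.nn.Parameter(w_init.clone())
        self.r = torch.nn.Parameter(r_init.clone())

    def forward(self, x):
        return x @ self.w + self.r


device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Without the trailing .to(device), the freshly constructed module's
# parameters stay on the CPU even if the outer detector's device is the GPU.
model = InnerModel(torch.randn(4, 4), torch.randn(4)).to(device)

x = torch.randn(2, 4, device=device)
out = model(x)  # no device-mismatch error: parameters and input share a device
assert all(p.device == out.device for p in model.parameters())
```

The same one-line `.to(self.device)` after the inner model's construction would keep the loss and backward computation on the GPU as well.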
Moreover, the two different ways the same step is written at pygod/detector/radar.py lines 119-123 and pygod/detector/anomalous.py lines 123-127 also confused me.
I'm wondering how to make use of the GPU with PyGOD, and I look forward to your reply.