ShichenLiu / CondenseNet

CondenseNet: Lightweight CNN for mobile devices
MIT License

Testing on ARM without CUDA and GPU #37

Closed LeighDavis closed 3 years ago

LeighDavis commented 3 years ago

I am aware that you have tested the CondenseNet model with PyTorch on the CPU (an ARM processor) of the Jetson TX2, which has Nvidia CUDA support.

However, can this model be tested with PyTorch on an ARM CPU/system without CUDA support, i.e. using only CPU resources? We have an NXP BlueBox 2.0, and it does not support CUDA.

At the moment, I am getting this error on my non-CUDA system: RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.

After I add map_location=torch.device('cpu') to torch.load, I get this error: RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu

When I run the base script on a GPU machine with Nvidia CUDA, model testing runs without any issue.
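For reference, this is roughly the CPU-only loading path I am experimenting with (a minimal sketch: build_model() is just a placeholder for however the test script constructs the network, the checkpoint path is made up, and I am assuming the checkpoint is a dict with a 'state_dict' entry saved from an nn.DataParallel wrapper):

```python
import torch

# Placeholder: build the network the same way the repo's test script does;
# build_model() is not a real function in this repo.
model = build_model()

# Map the CUDA-saved storages to the CPU while loading the checkpoint.
checkpoint = torch.load('checkpoint.pth.tar',  # placeholder path
                        map_location=torch.device('cpu'))

# If the checkpoint was saved from an nn.DataParallel wrapper, every key is
# prefixed with 'module.'. Stripping that prefix and loading into the bare
# model avoids wrapping it in DataParallel again, which appears to be what
# raises the "parameters and buffers on device cuda:0" error on a
# CPU-only machine.
state_dict = {k.replace('module.', '', 1): v
              for k, v in checkpoint['state_dict'].items()}
model.load_state_dict(state_dict)
model.eval()
```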