I was attempting to run one of the models on one GPU and the other model on a second GPU, but I get a crash. I set dev_ids to 0, 1, and I verified that nvidia-smi shows both GPUs. I also verified that the GPUs work with TensorFlow.
The crash happens after forward is called in action_caffe.py, on this line:
out = self._net.forward(blobs=[score_name,], data=data)
Error:
F0630 03:45:26.391547 1 cudnn_conv_layer.cu:34] Check failed: status == CUDNN_STATUS_SUCCESS (8 vs. 0) CUDNN_STATUS_EXECUTION_FAILED