I have tested darknet on Google Cloud machines with one of the following GPUs:
V100
T4
P100
P4
K80
cuDNN 7 with CUDA 10.1 works on all GPUs (probably the same for 10.2).
cuDNN 8 (8.0.5) with CUDA 10.1, or cuDNN 8 (8.1) with CUDA 11.2, causes an error during validation.
The error occurs only on the T4 and V100; the rest work fine.
The error is:
cuDNN Error: CUDNN_STATUS_BAD_PARAM in convolutional_kernels.cu : forward_convolutional_layer_gpu()
It seems there is a specific bug in the cuDNN 8 support, regardless of the CUDA version, for specific GPUs. It could be that this bug affects all Turing and Volta architectures.
I tried to test darknet on a T4 GPU.
cuDNN 8 (8.1) with CUDA 11.2 also causes an error during validation:
cuDNN Error: CUDNN_STATUS_BAD_PARAM in convolutional_kernels.cu : forward_convolutional_layer_gpu()