Whenever I run PyTorch with CUDA in the baseline Docker container, it fails with:
"RuntimeError: Unexpected error from cudaGetDeviceCount(). Did you run some cuda functions before calling NumCudaDevices() that might have already set an error? Error 804: forward compatibility was attempted on non supported HW"
I'm using an RTX 3090. Both on my host PC and inside the Docker environment, the driver version is 525 and the CUDA version is 12.1.
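For reference, this is roughly how I trigger the error inside the container. This is a minimal sketch, assuming PyTorch is installed there; the error surfaces as soon as anything queries the device count:

```python
def report_cuda_devices():
    """Return the visible CUDA device count, or the error message if the query fails."""
    try:
        import torch
    except ImportError:
        return "torch not installed"
    try:
        # In the failing container this raises:
        # RuntimeError: ... Error 804: forward compatibility was attempted on non supported HW
        return torch.cuda.device_count()
    except RuntimeError as e:
        return str(e)

print(report_cuda_devices())
```

On the host the same call prints `1` as expected; only inside the container does it fail.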
What's wrong?