Closed moowcharnfu closed 2 years ago
@moowcharnfu
pytorch-native-cu113 should be able to run on CUDA 11.4. On Linux, there is a symbolic link to libnvrtc-builtins64.so.1.10. To work around this issue on Windows, you can copy nvrtc-builtins64_114.dll to nvrtc-builtins64_113.dll.
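The copy step above can be scripted. Here is a minimal, hedged Java sketch of that workaround; the CUDA install path is an assumption for a default Windows install (use the `CUDA_PATH` environment variable or your actual location), and the class/method names are mine, not part of DJL:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class NvrtcWorkaround {

    /** Copy srcDll to dstDll inside the same directory, overwriting any existing file. */
    static Path copyDll(Path dir, String srcDll, String dstDll) throws Exception {
        return Files.copy(dir.resolve(srcDll), dir.resolve(dstDll),
                          StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws Exception {
        // Assumed default CUDA 11.4 location; CUDA_PATH takes precedence if set.
        Path cudaBin = Paths.get(System.getenv().getOrDefault(
                "CUDA_PATH",
                "C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v11.4"))
                .resolve("bin");
        Path src = cudaBin.resolve("nvrtc-builtins64_114.dll");
        if (Files.exists(src)) {
            copyDll(cudaBin, "nvrtc-builtins64_114.dll", "nvrtc-builtins64_113.dll");
            System.out.println("Copied to nvrtc-builtins64_113.dll");
        } else {
            System.out.println("nvrtc-builtins64_114.dll not found at " + src);
        }
    }
}
```

After the copy, nvrtc resolves nvrtc-builtins64_113.dll to the 11.4 builtins, which is binary-compatible enough for the cu113 native library to load.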
A smart way to deal with it. Closing.
My OS CUDA version is 11.4, but your library automatically downloads 1.11.0-cu113, which does not match the OS. Will you fix this bug?

Cause: a fatal error with the following message:
ai.djl.engine.EngineException: nvrtc: error: failed to open nvrtc-builtins64_113.dll. Make sure that nvrtc-builtins64_113.dll is installed correctly. nvrtc compilation failed:
#define NAN __int_as_float(0x7fffffff)
#define POS_INFINITY __int_as_float(0x7f800000)
#define NEG_INFINITY __int_as_float(0xff800000)

template<typename T>
__device__ T maximum(T a, T b) {
  return isnan(a) ? a : (a > b ? a : b);
}

template<typename T>
__device__ T minimum(T a, T b) {
  return isnan(a) ? a : (a < b ? a : b);
}

extern "C" __global__ void fused_sigmoid_mul(float* tx_10, float* aten_mul) {
  {
    float tx_10_1 = __ldg(tx_10 + (long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x));
    aten_mul[(long long)(threadIdx.x) + 512ll * (long long)(blockIdx.x)] =
        tx_10_1 * (1.f / (1.f + (expf(0.f - tx_10_1))));
  }
}
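For reference, the fused kernel in the dump simply computes x * sigmoid(x) (the SiLU/swish activation) element-wise; only the JIT compilation of it failed, not the math. A minimal host-side Java sketch of the same arithmetic (class and method names are mine, for illustration only):

```java
public class FusedSigmoidMul {
    // Same arithmetic as the fused_sigmoid_mul kernel: out[i] = x[i] * sigmoid(x[i]).
    static float[] fusedSigmoidMul(float[] x) {
        float[] out = new float[x.length];
        for (int i = 0; i < x.length; i++) {
            out[i] = x[i] * (1.f / (1.f + (float) Math.exp(0.f - x[i])));
        }
        return out;
    }

    public static void main(String[] args) {
        float[] out = fusedSigmoidMul(new float[]{-1.f, 0.f, 1.f});
        System.out.println(java.util.Arrays.toString(out));
    }
}
```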