Open merry-cooperation opened 5 years ago
Hello. The example runs great with a freshly built TensorRT engine, but when trying to run inference with a cached one I get

Cuda failure: 1 Aborted (core dumped)

after

# Transfer input data to the GPU.
[cuda.memcpy_htod_async(inp.device, inp.host, stream) for inp in inputs]

in do_inference() at line 281 of inference.py.
Deleting the .buf file and rebuilding the engine helps.
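Since rebuilding always fixes it, the cached engine is likely incompatible with the current environment (e.g., serialized under a different TensorRT/driver version, which makes deserialized engines crash at the first CUDA call). A minimal sketch of one way to guard the cache, invalidating it whenever the recorded version no longer matches. The helper and file names here are hypothetical, not part of the TensorRT sample:

```python
import json
import os


def load_or_build_engine(cache_path, meta_path, current_version, build_fn):
    """Return serialized engine bytes, reusing the cache only when safe.

    Hypothetical helper: alongside the engine cache we store a small
    metadata file recording the version the engine was built with.
    If the recorded version differs from current_version, the cache is
    treated as stale and build_fn() is called to rebuild.
    """
    if os.path.exists(cache_path) and os.path.exists(meta_path):
        with open(meta_path) as f:
            meta = json.load(f)
        if meta.get("version") == current_version:
            # Cache hit: versions match, reuse the serialized engine.
            with open(cache_path, "rb") as f:
                return f.read()

    # Cache miss or version mismatch: rebuild and refresh both files.
    engine_bytes = build_fn()
    with open(cache_path, "wb") as f:
        f.write(engine_bytes)
    with open(meta_path, "w") as f:
        json.dump({"version": current_version}, f)
    return engine_bytes
```

In a real setup, `current_version` could combine `tensorrt.__version__` with the driver/CUDA version, and `build_fn` would serialize a freshly built engine, so a driver or library upgrade silently triggers a rebuild instead of a crash.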
Same here: segfault at line 281 of inference.py (GTX 1060, CUDA 10.1, NVIDIA driver 418.56).
I have the same problem.
Any progress on solving this issue?