HoangTienDuc closed this issue 3 years ago.
I can run the same model successfully on Triton Server 20.11, but when I run it with DeepStream-Triton 5.1.21.2 I get an error that looks related to a version tag mismatch.
E0716 06:43:17.929970 5269 logging.cc:43] coreReadArchive.cpp (32) - Serialization Error in verifyHeader: 0 (Magic tag does not match)
E0716 06:43:17.930068 5269 logging.cc:43] INVALID_STATE: std::exception
E0716 06:43:17.930074 5269 logging.cc:43] INVALID_CONFIG: Deserialize the cuda engine failed.
W0716 06:43:17.930082 5269 autofill.cc:225] Autofiller failed to detect the platform for retinaface_preprocess (verify contents of model directory or use --log-verbose=1 for more details)
W0716 06:43:17.930086 5269 autofill.cc:248] Proceeding with simple config for now
I0716 06:43:17.930418 5269 model_repository_manager.cc:810] loading: retinaface_preprocess:1
E0716 06:43:17.941151 5269 model_repository_manager.cc:986] failed to load 'retinaface_preprocess' version 1: Not found: unable to load backend library: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: cannot allocate memory in static TLS block
ERROR: infer_trtis_server.cpp:1044 Triton: failed to load model retinaface_preprocess, triton_err_str:Invalid argument, err_msg:load failed for model 'retinaface_preprocess': version 1: Not found: unable to load backend library: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: cannot allocate memory in static TLS block;
ERROR: infer_trtis_backend.cpp:45 failed to load model: retinaface_preprocess, nvinfer error:NVDSINFER_TRTIS_ERROR
ERROR: infer_trtis_backend.cpp:184 failed to initialize backend while ensuring model:retinaface_preprocess ready, nvinfer error:NVDSINFER_TRTIS_ERROR
0:00:09.727932979 5269 0x4ab4780 ERROR nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<primary-inference> nvinferserver[UID 5]: Error in createNNBackend() <infer_trtis_context.cpp:246> [UID = 5]: failed to initialize trtis backend for model:retinaface_preprocess, nvinfer error:NVDSINFER_TRTIS_ERROR
I0716 06:43:17.941406 5269 server.cc:280] Waiting for in-flight requests to complete.
I0716 06:43:17.941431 5269 server.cc:295] Timeout 30: Found 0 live models and 0 in-flight non-inference requests
0:00:09.728082338 5269 0x4ab4780 ERROR nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<primary-inference> nvinferserver[UID 5]: Error in initialize() <infer_base_context.cpp:81> [UID = 5]: create nn-backend failed, check config file settings, nvinfer error:NVDSINFER_TRTIS_ERROR
0:00:09.728095248 5269 0x4ab4780 WARN nvinferserver gstnvinferserver_impl.cpp:439:start:<primary-inference> error: Failed to initialize InferTrtIsContext
0:00:09.728099778 5269 0x4ab4780 WARN nvinferserver gstnvinferserver_impl.cpp:439:start:<primary-inference> error: Config file path: /data/deepstream-retinaface/dstest_ssd_nopostprocess.txt
0:00:09.728476338 5269 0x4ab4780 WARN nvinferserver gstnvinferserver.cpp:460:gst_nvinfer_server_start:<primary-inference> error: gstnvinferserver_impl start failed
Error: gst-resource-error-quark: Failed to initialize InferTrtIsContext (1): gstnvinferserver_impl.cpp(439): start (): /GstPipeline:pipeline0/GstNvInferServer:primary-inference:
How do I fix this problem?
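For context, the first part of the log ("Magic tag does not match" / "Deserialize the cuda engine failed") is what TensorRT reports when a serialized engine was built with a different TensorRT version than the one trying to load it. A serialized .plan engine is generally only loadable by the TensorRT version that produced it, so a common fix for that part is to rebuild the engine inside the DeepStream-Triton container. A minimal sketch, assuming the model has an ONNX source file; the model name, paths, and file names below are placeholders, not taken from this issue:

    # Run inside the DeepStream-Triton container so the engine is serialized
    # with the same TensorRT version that will deserialize it at load time.
    # trtexec typically ships at this path in NVIDIA containers; adjust if needed.
    /usr/src/tensorrt/bin/trtexec \
        --onnx=/models/<model_name>/1/model.onnx \
        --saveEngine=/models/<model_name>/1/model.plan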
I believe this issue has been resolved in https://github.com/NVIDIA/DALI/issues/3156. Closing
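For reference, the failure that actually prevents the model from loading ("unable to load backend library: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: cannot allocate memory in static TLS block") is a known glibc limitation that appears when a library needing static TLS is dlopen()ed after process start-up. A commonly suggested workaround for this class of error is to preload the affected library before launching the pipeline. A minimal sketch, assuming the pipeline is started from a shell inside the DeepStream-Triton container; the application command below is a placeholder:

    # Preload the library reported in the log so its TLS is reserved at
    # process start-up instead of at dlopen() time.
    export LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libstdc++.so.6
    # Then start the DeepStream application as usual (placeholder command):
    ./your-deepstream-app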