marcoslucianops / DeepStream-Yolo

NVIDIA DeepStream SDK 7.0 / 6.4 / 6.3 / 6.2 / 6.1.1 / 6.1 / 6.0.1 / 6.0 / 5.1 implementation for YOLO models

Creating custom model causes error. Dynamic input missing dimensions? #461

Closed mgabell closed 1 year ago

mgabell commented 1 year ago

I am trying to create a custom model. Training completes with no errors, and the wandb tracking shows good progress and stats.

I trained it using Ultralytics like this (API key removed):

from ultralytics import YOLO
from ultralytics import settings

# Update a setting
settings.update({
#    'runs_dir': '/hdd//YoloV8/ultralytics/runs',
#    'datasets_dir':'/hdd/yolov8/datasets',
#    'weights_dir':'/hdd/yolov8/weights',
#    'runs_dir':'/hdd/Development/YoloV8/ultralytics/runs',
#    'batch':'2000'
})

print(settings)

# Load a model
model = YOLO('yolov8n.pt')

# Train the model using the 'HSP_001.yaml' dataset for 2000 epochs
results = model.train(data='./yaml/HSP_001.yaml', epochs=2000, batch=16)

# Evaluate the model's performance on the validation set
results = model.val()

Then I use the export function:

from ultralytics import YOLO

# Load the trained weights and export to ONNX with dynamic axes
model = YOLO('/hdd/yolov8/models/HSP/desert-pond-4/weights/best.pt')  # pass any model type
success = model.export(format='onnx', dynamic=True)

if success:
    print("Model converted")
else:
    print("Model could not be converted")

I get these warnings and errors when running my own custom model:

WARNING: Deserialize engine failed because file path: /home/aiadmin/Development/deepstream-yolov8-evaluation/model_b2_gpu0_fp16.engine open error
0:00:02.478703889 13411 0x31f9d000 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() [UID = 1]: deserialize engine from file :/home/aiadmin/Development/deepstream-yolov8-evaluation/model_b2_gpu0_fp16.engine failed
0:00:02.642975753 13411 0x31f9d000 WARN nvinfer gstnvinfer.cpp:677:gst_nvinfer_logger: NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() [UID = 1]: deserialize backend context from engine from file :/home/aiadmin/Development/deepstream-yolov8-evaluation/model_b2_gpu0_fp16.engine failed, try rebuild
0:00:02.643081002 13411 0x31f9d000 INFO nvinfer gstnvinfer.cpp:680:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: onnx2trt_utils.cpp:375: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: [TRT]: 3: [optimizationProfile.cpp::setDimensions::127] Error Code 3: API Usage Error (Parameter check failed at: runtime/common/optimizationProfile.cpp::setDimensions::127, condition: std::all_of(dims.d, dims.d + dims.nbDims, [](int32_t x) noexcept { return x >= 0; }) )
ERROR: [TRT]: 3: [optimizationProfile.cpp::setDimensions::127] Error Code 3: API Usage Error (Parameter check failed at: runtime/common/optimizationProfile.cpp::setDimensions::127, condition: std::all_of(dims.d, dims.d + dims.nbDims, [](int32_t x) noexcept { return x >= 0; }) )
ERROR: [TRT]: 3: [optimizationProfile.cpp::setDimensions::127] Error Code 3: API Usage Error (Parameter check failed at: runtime/common/optimizationProfile.cpp::setDimensions::127, condition: std::all_of(dims.d, dims.d + dims.nbDims, [](int32_t x) noexcept { return x >= 0; }) )

Building the TensorRT Engine

WARNING: [TRT]: DLA requests all profiles have same min, max, and opt value. All dla layers are falling back to GPU
ERROR: [TRT]: 4: [network.cpp::validate::3088] Error Code 4: Internal Error (images: dynamic input is missing dimensions in profile 0.)
Building engine failed

Failed to build CUDA engine
ERROR: Failed to create network using custom network creation function
ERROR: Failed to get cuda engine from custom library API
0:00:04.856199072 13411 0x31f9d000 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::buildModel() [UID = 1]: build engine file failed
0:00:05.025727559 13411 0x31f9d000 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::generateBackendContext() [UID = 1]: build backend context failed
0:00:05.025782695 13411 0x31f9d000 ERROR nvinfer gstnvinfer.cpp:674:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::initialize() [UID = 1]: generate backend failed, check config file settings
0:00:05.025871560 13411 0x31f9d000 WARN nvinfer gstnvinfer.cpp:888:gst_nvinfer_start: error: Failed to create NvDsInferContext instance
0:00:05.025943689 13411 0x31f9d000 WARN nvinfer gstnvinfer.cpp:888:gst_nvinfer_start: error: Config file path: config_infer_primary_yoloV8.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Error: gst-resource-error-quark: Failed to create NvDsInferContext instance (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(888): gst_nvinfer_start (): /GstPipeline:pipeline0/GstNvInfer:primary-inference: Config file path: config_infer_primary_yoloV8.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Exiting app
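For context, this combination of messages is what nvinfer typically reports when the ONNX input (named images here) is dynamic in every dimension: the plugin can fill in the batch dimension from batch-size, but it has no values to put into the optimization profile for height and width, hence the negative-dimension checks and "dynamic input is missing dimensions in profile 0". The engine name in the log (model_b2_gpu0_fp16.engine) corresponds to batch-size=2 and FP16 mode. A sketch of the config_infer_primary_yoloV8.txt [property] entries that have to agree with the exported model, assuming the layout of this repo's sample config (file names and values are placeholders, not the poster's actual settings):

[property]
onnx-file=best.onnx
model-engine-file=model_b2_gpu0_fp16.engine
batch-size=2
network-mode=2
# only if height/width were exported as dynamic axes:
# infer-dims=3;640;640
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so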

marcoslucianops commented 1 year ago

https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/docs/YOLOv8.md
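The linked guide exports the model with the repo's own utils/export_yoloV8.py rather than Ultralytics' model.export(); per that document, it keeps the batch dimension dynamic while fixing the input size, which avoids the missing-dimension error above. Roughly (the exact flags are taken from that guide and may change between versions; best.pt is a placeholder):

python3 export_yoloV8.py -w best.pt --dynamic

The generated .onnx and labels.txt then go into the DeepStream-Yolo folder, with onnx-file in config_infer_primary_yoloV8.txt pointing at the new file.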