I was able to optimize the 'ssd_inception_v2_coco_2017_11_17' model using the main.py script and saved the engine file for TensorRT inference.
Now, to use this model in the deepstream test app, I am providing the engine file in the application's parameter file as 'model-engine-file', along with 'libflattenconcat.so' as 'custom-lib-path'.
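The relevant part of my nvinfer config looks roughly like this (the engine filename is illustrative, not my exact path):

    [property]
    # engine file produced by main.py (name is just an example)
    model-engine-file=sample_ssd_inception_v2.engine
    # plugin library containing the FlattenConcat plugin
    custom-lib-path=libflattenconcat.so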
The issue is: the application starts and plays the video on the sink, but no detection boxes appear, even though there are many cars in the sample video provided with the DeepStream SDK. One error is printed continuously: 'Error: Could not find coverage layer while parsing output'.
By default, DeepStream only supports the ResNet-style output parser, so if I use SSD I think I also have to provide a function name in 'parse-bbox-func-name' when using a custom plugin. However, I don't have the source of this libflattenconcat.so. Can you please provide the function name so that the application can parse the outputs?
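In other words, I expect the config needs something like the following, but I don't know what to put for the function name (placeholder below):

    [property]
    # the custom SSD bbox parsing function I'm asking about
    parse-bbox-func-name=<?>
    custom-lib-path=libflattenconcat.so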