dlstreamer / pipeline-server

Home of Intel(R) Deep Learning Streamer Pipeline Server (formerly Video Analytics Serving)
BSD 3-Clause "New" or "Revised" License

gst_buffer_resize_range: assertion 'bufmax >= bufoffs + offset + size' failed #2

Closed kanagala388 closed 5 years ago

kanagala388 commented 5 years ago

I installed the gst plugins from the repo, and when I ran my application I got the following error message: GStreamer-CRITICAL **: 04:34:45.143: gst_buffer_resize_range: assertion 'bufmax >= bufoffs + offset + size' failed

Can you please provide some help to resolve this issue?

cgdougla commented 5 years ago

Thanks for reaching out. Can you give me some more information so I can try to help resolve this? Which elements did you install? What are you trying to do with your application (i.e. using an rtsp/file/... source, do you want to do object detection/face detection/...)? Can you provide the pipeline you are trying to run?

kanagala388 commented 5 years ago

I built an image using Dockerfile.gst, then built my application (which is also a container) using the video_analytics_serving image as its base, as shown below:

ARG base_name=video_analytics_serving_gstreamer
FROM ${base_name}

I am trying to build a person detection application.

Here is the pipeline I am trying to run:

{
  "name": "person_detection",
  "version": 1,
  "type": "GStreamer",
  "template": "urisourcebin uri=\"{source[uri]}\" ! concat name=c ! decodebin ! video/x-raw ! videoconvert name=\"videoconvert\" ! gvadetect inference-id=inf0 model=\"{models[person_detection][1][network]}\" model-proc=\"{models[person_detection][1][proc]}\" name=\"detection\" ! gvametaconvert converter=json method=detection source=\"{source[uri]}\" name=\"jsonmetaconvert\" ! tee name=t ! queue ! gvawatermark ! videoconvert ! autovideosink t. ! queue ! appsink name=appsink",
  "description": "Person Detection Pipeline",
  "parameters": {
    "every-nth-frame": { "element": "detection" },
    "cpu-streams": { "element": "detection" },
    "n-threads": { "element": "videoconvert" },
    "nireq": { "element": "detection" }
  }
}
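As an aside, the {source[uri]}-style placeholders in the template look like Python format-string fields. A minimal sketch of how such a template might be filled in (the paths below and the exact substitution mechanism are assumptions for illustration, not the actual Video Analytics Serving code):

```python
# Hypothetical illustration of how "{source[uri]}"-style placeholders could be
# resolved with Python string formatting. Paths are placeholders, not real files.
template = (
    'urisourcebin uri="{source[uri]}" ! decodebin '
    '! gvadetect model="{models[person_detection][1][network]}" name="detection"'
)

filled = template.format(
    source={"uri": "file:///tmp/sample.mp4"},
    models={"person_detection": {1: {"network": "/models/person-detection.xml"}}},
)
print(filled)
```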

The observation here is that a pop-up window opens, but I am unable to see the video content. And the docker log contains many repetitions of the same error message: gst_buffer_resize_range: assertion 'bufmax >= bufoffs + offset + size' failed

@cgdougla Can you also provide details on the root cause of this issue? gst_buffer_resize_range: assertion 'bufmax >= bufoffs + offset + size' failed
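For context, the assertion text itself describes the invariant being violated: a resize request must stay inside the buffer's allocated memory. A rough sketch of that check in Python (illustrative names and values only, not the actual GStreamer implementation):

```python
# Illustrative sketch of the invariant behind the assertion
# 'bufmax >= bufoffs + offset + size': the requested sub-range
# (starting offset plus size) must fit within the buffer's allocation.
def resize_in_bounds(bufmax, bufoffs, offset, size):
    """Return True if the requested sub-range fits in the allocation."""
    return bufmax >= bufoffs + offset + size

# A request that fits within a 4096-byte allocation:
print(resize_in_bounds(bufmax=4096, bufoffs=0, offset=0, size=4096))  # True
# A request past the end of the allocation, which would trip the assertion:
print(resize_in_bounds(bufmax=4096, bufoffs=0, offset=0, size=8192))  # False
```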

cgdougla commented 5 years ago

I copied your pipeline and ran it with R1 of person-detection-retail-0013 from the OpenVINO open_model_zoo, and I cannot reproduce this behavior. Are there any other steps you took or changes you made? Are you using the R1 versions of your models?

kanagala388 commented 5 years ago

@cgdougla Thanks for your response. The issue is resolved.

JaydonChion commented 4 years ago

@kanagala388 I have the same error. Can I ask how you solved it? Thanks

nnshah1 commented 4 years ago

@JaydonChion Can you provide details on the pipeline and example media file you are seeing this issue with?

JaydonChion commented 4 years ago

@nnshah1, I am running an example based on one of DeepStream's samples (deepstream-5.0/sources/apps/sample_apps/deepstream-image-decode-test)

pipeline: filesrc -> jpegparse -> nvv4l2decoder -> nvstreammux -> nvinfer -> nvtiler -> nvvidconv -> nvosd -> video-renderer

The input is a 224x111x3 JPEG.

The input dimension of the model is 3x960x540.

nnshah1 commented 4 years ago

Is there a model from the open model zoo that has a similar topology?

Generally speaking, the equivalent pipeline would be something like:

filesrc -> decodebin -> gvadetect -> gvawatermark -> ximagesink

JaydonChion commented 4 years ago

@nnshah1 Thank you for your suggestion. The pipeline I described above is the one used in the NVIDIA DeepStream example; I will try your suggestion. Can I also check with you: is there a GStreamer plugin I can include directly in the pipeline to resize the input image from an arbitrary size to 3x640x640?

nnshah1 commented 4 years ago

@JaydonChion videoconvert or vaapipostproc can both resize an input image based on the required caps. However, if the goal is to resize the image for inference, gvadetect will automatically resize the image based on the requirements of the model, so this does not need to be done in the pipeline itself.
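To illustrate the explicit-resize option, a pipeline string using a scaling element followed by a caps filter might look like the sketch below. The file paths, the jpegdec/videoscale/ximagesink choices, and the model are placeholders for illustration; this is an untested sketch, not a verified pipeline:

```python
# Sketch of forcing an explicit resize in the pipeline itself: a caps filter
# after the scaler pins the output resolution. All paths are placeholders.
width, height = 640, 640
pipeline = (
    "filesrc location=/tmp/input.jpg ! jpegdec ! videoscale "
    f"! video/x-raw,width={width},height={height} "
    "! videoconvert ! gvadetect model=/models/person-detection.xml ! ximagesink"
)
print(pipeline)
```

As noted above, this manual step is unnecessary when the resize is only for inference, since gvadetect scales its input to the model's dimensions on its own.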

Also, I believe this to be an independent issue from the one originally reported here. Can we open a new issue if more help / guidance is needed? Perhaps it could be titled "Converting a DeepStream pipeline to an equivalent DL Streamer pipeline".