RidgeRun / gst-inference

A GStreamer Deep Learning Inference Framework
GNU Lesser General Public License v2.1

Loading a TYv2 model in a TYv3 backend segfaults #281

Open michaelgruner opened 4 years ago

michaelgruner commented 4 years ago

I know this is a dumb and erroneous use case (loading a TinyYOLOv2 graph into the tinyyolov3 element), but we should error out instead of segfaulting.

This is an example pipeline:

gst-launch-1.0 \
  avfvideosrc ! video/x-raw ! inferencefilter filter-class=-1 ! \
  inferencedebug name=before ! videoconvert ! tee name=tee \
  tee. ! queue max-size-buffers=3 ! arch.sink_bypass \
  tee. ! queue max-size-buffers=3 ! inferencecrop enable=false ! videoscale ! arch.sink_model \
  arch.src_bypass ! queue ! inferencedebug name=after ! inferenceoverlay ! glimagesink sync=false \
  tinyyolov3 name=arch model-location=graph_tinyyolov2_tensorflow.pb labels='(null)' \
    backend::input-layer=input/Placeholder backend::output-layer=add_8 \
  --gst-debug=2,*inference*:9
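
For reference, one way to fail gracefully would be to check that the configured output layer actually exists in the loaded graph before running inference, and post a normal GStreamer error on the bus if it does not. The sketch below is only illustrative and makes assumptions: the helper name gst_tinyyolov3_validate_graph and the graph_layers list are hypothetical, not the actual gst-inference/r2inference API.

#include <gst/gst.h>

/* Hypothetical helper: verify that the requested output layer is present in
 * the list of layer names reported by the loaded graph, and post a regular
 * element error instead of letting the backend crash. */
static gboolean
gst_tinyyolov3_validate_graph (GstElement * element, GList * graph_layers,
    const gchar * output_layer)
{
  /* graph_layers is assumed to hold the layer names as plain strings */
  if (!g_list_find_custom (graph_layers, output_layer,
          (GCompareFunc) g_strcmp0)) {
    GST_ELEMENT_ERROR (element, RESOURCE, NOT_FOUND,
        ("Output layer '%s' not found in the loaded model", output_layer),
        ("The model file is likely incompatible with this architecture"));
    return FALSE;
  }

  return TRUE;
}

With a check along these lines, the pipeline above would stop with an error message on the bus instead of segfaulting.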