dhaval-zala-aivid opened this issue 2 years ago (status: Open)
You can probably use jpegenc and multifilesink to save .jpg images, e.g.:
! jpegenc ! multifilesink location=img_%06d.jpg
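For reference, a minimal standalone sketch of that approach through the GStreamer Python bindings (videotestsrc stands in for the real source, and the location pattern is arbitrary):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
# videotestsrc stands in for the decoded source; num-buffers bounds the run
pipeline = Gst.parse_launch(
    "videotestsrc num-buffers=10 ! videoconvert ! jpegenc"
    " ! multifilesink location=img_%06d.jpg"
)
pipeline.set_state(Gst.State.PLAYING)
# Block until EOS or an error, then shut down cleanly
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)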
@mikhail-nikolskiy
I want to save JPEGs based on post-processing. I don't want to save every frame, which is why I used the ImageCapture class in the pipeline to save only particular snapshots. The approach you suggested captures an image for every frame, which is not required in my case.
There seems to be an interaction between x264enc and the queue that stalls the pipeline (in my local experiments). Using VAAPI elements I was able to get the output correctly. You may try putting a queue after the second tee branch. The other thing to note is that if you name the splitmuxsink element "splitmuxsink", Pipeline Server will set its location automatically to include the timestamp of the recording.
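For a quick local check of that branch structure, something along these lines should work outside Pipeline Server (a sketch only: videotestsrc stands in for the decoded source, the VAAPI elements assume a machine where they are usable, and the file names and segment length are placeholder values):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
# Each tee branch gets its own queue; the second branch encodes and is
# segmented into files by splitmuxsink, mirroring the attached template.
description = (
    "videotestsrc num-buffers=300 ! tee name=t "
    "t. ! queue ! videoconvert ! jpegenc ! multifilesink location=snap_%06d.jpg "
    "t. ! queue ! vaapipostproc ! vaapih264enc"
    " ! splitmuxsink location=recording_%02d.avi muxer=avimux max-size-time=10000000000"
)
pipeline = Gst.parse_launch(description)
pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)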
Attaching the pipeline.json and save_jpeg.py files for reference.
{
  "type": "GStreamer",
  "template": [
    "{auto_source} ! decodebin",
    " ! tee name=t ! queue",
    " ! gvadetect model={models[object_detection][person_vehicle_bike][network]} name=detection",
    " ! gvametaconvert name=metaconvert ! jpegenc ! gvapython module=/home/pipeline-server/save_jpeg.py",
    " ! appsink name=appsink",
    " t. ! queue ! vaapipostproc ! vaapih264enc ! splitmuxsink name=splitmuxsink muxer=avimux max-size-time=30000"
  ],
  "description": "Person Vehicle Bike Detection based on person-vehicle-bike-detection-crossroad-0078",
  "parameters": {
    "type": "object",
    "properties": {
      "detection-properties": {
        "element": {
          "name": "detection",
          "format": "element-properties"
        }
      },
      "detection-device": {
        "element": {
          "name": "detection",
          "property": "device"
        },
        "type": "string",
        "default": "{env[DETECTION_DEVICE]}"
      },
      "detection-model-instance-id": {
        "element": {
          "name": "detection",
          "property": "model-instance-id"
        },
        "type": "string"
      },
      "inference-interval": {
        "element": "detection",
        "type": "integer"
      },
      "threshold": {
        "element": "detection",
        "type": "number"
      },
      "recording_prefix": {
        "type": "string",
        "default": "/home/pipeline-server"
      }
    }
  }
}
from gstgva.util import gst_buffer_data
import gi
gi.require_version("Gst", "1.0")
# pylint: disable=wrong-import-position
from gi.repository import Gst

count = 0

def process_frame(frame):
    # Called by gvapython for each buffer; the buffer is already JPEG-encoded
    # by the upstream jpegenc element, so its bytes can be written out directly.
    global count
    count += 1
    buffer = frame._VideoFrame__buffer
    with gst_buffer_data(buffer, Gst.MapFlags.READ) as data:
        # Keep a rotating window of the 10 most recent snapshots
        filename = "frame-{}.jpeg".format(count % 10)
        with open(filename, "wb", 0) as output:
            output.write(data)
    return True
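Since the goal is to save only particular snapshots rather than every frame, the same hook can gate the write on the inference results. The variant below is only a sketch: it assumes the detection metadata (exposed through VideoFrame.regions() in gstgva) is still attached to the buffer at the gvapython stage; if it is not preserved across jpegenc, the gvapython element may need to sit earlier in the pipeline. The file-name pattern is arbitrary.

from gstgva.util import gst_buffer_data
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

count = 0

def process_frame(frame):
    global count
    # Only write a snapshot when the frame carries at least one detection
    if not list(frame.regions()):
        return True
    count += 1
    buffer = frame._VideoFrame__buffer
    with gst_buffer_data(buffer, Gst.MapFlags.READ) as data:
        filename = "snapshot-{}.jpeg".format(count % 10)
        with open(filename, "wb", 0) as output:
            output.write(data)
    # Returning True keeps the buffer flowing downstream
    return True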
@nnshah1
How do I add jpegenc (to generate snapshots) and splitmuxsink (to generate video) together in a single pipeline? I am able to add jpegenc or splitmuxsink separately and the pipeline runs properly, but if I add both together in a single pipeline it generates only the snapshots and the video file is 0 bytes in size. I have tried multiple combinations for this, but have not been able to solve it.
Here is the pipeline: