Closed YoungjaeDev closed 7 months ago
@youngjae-avikus sorry, I have not had a chance to look at this.. give me a couple of days to get back to you.
Yes, it's a curious topic, so let's discuss it again in a week
@youngjae-avikus, @HappyKerry, my apologies for taking so long to cover this topic...
As I'm sure you know, the Custom Pad Probe Handler will give you access to each buffer flowing over the Parent Component's sink or source pad.
@youngjae-avikus, I recently added the service dsl_source_pph_add
to provide access before the Streammuxer
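As a rough sketch of how that service would be wired up: a custom pad probe handler is created first, then attached to a named source so it sees buffers before the Streammuxer. The `dsl` import, the exact parameter names, and the `DSL_PAD_PROBE_OK` fallback value below are assumptions to verify against the DSL API reference; the import is guarded so the sketch stays loadable without the library installed.

```python
# Hypothetical wiring sketch (assumption: the DSL Python bindings expose
# dsl_pph_custom_new and dsl_source_pph_add with roughly these signatures).
try:
    from dsl import dsl_pph_custom_new, dsl_source_pph_add, DSL_PAD_PROBE_OK
    HAVE_DSL = True
except ImportError:
    # Library not installed; keep the sketch importable. The fallback
    # value 0 is a placeholder, not the real constant.
    HAVE_DSL = False
    DSL_PAD_PROBE_OK = 0

def on_source_buffer(buffer, client_data):
    # Called once per buffer on the source's pad, i.e. *before* the
    # Streammuxer -- which is the point of dsl_source_pph_add.
    return DSL_PAD_PROBE_OK

def attach_handler(source_name: str) -> bool:
    """Create a custom PPH and add it to the named source component."""
    if not HAVE_DSL:
        return False
    dsl_pph_custom_new('pre-mux-pph', client_handler=on_source_buffer,
                       client_data=None)
    dsl_source_pph_add(source_name, handler='pre-mux-pph')
    return True
```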
When it comes to actually working with buffers, the only experience I have so far is my work with the ODE Capture Actions, where I copy/transform the buffer and then convert it to an image. This work is very similar to the gstdsexample and uses NVBUF_MEM_CUDA_UNIFIED on dGPU. I'm not aware of any other way to do this. For what it's worth, my source code for this is here.
@HappyKerry, if you're using Python and the pyds module, you will need to ensure that the buffers are in RGBA format. This can be done by adding the custom PPH to the sink pad of the OSD component, or by explicitly setting the buffer-out-format for your source(s) to RGBA with dsl_source_video_buffer_format_set.
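As a hedged sketch of that Python path: once the buffers are RGBA, `pyds.get_nvds_buf_surface()` returns the frame as a NumPy array that can be modified in place. The pyds calls are shown only in comments, since they need DeepStream hardware to run and their exact usage should be checked against the pyds docs; the NumPy histogram-equalization helper below stands in for `cv2.equalizeHist` so the per-frame operation itself is self-contained.

```python
import numpy as np

def equalize_luma_inplace(rgba: np.ndarray) -> None:
    """Histogram-equalize the luma of an RGBA frame in place.

    Similar in spirit to cv2.equalizeHist, but written with NumPy so it
    works directly on the mapped (CPU-visible) unified-memory view.
    """
    rgb = rgba[..., :3].astype(np.float32)
    # ITU-R BT.601 luma weights
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    luma_u8 = np.clip(luma, 0, 255).astype(np.uint8)
    hist = np.bincount(luma_u8.ravel(), minlength=256)
    cdf = hist.cumsum()
    nonzero = cdf[cdf > 0]
    if nonzero.size == 0:
        return
    cdf_min, total = nonzero[0], cdf[-1]
    if total == cdf_min:
        return  # flat image; nothing to equalize
    lut = np.round((cdf - cdf_min) / (total - cdf_min) * 255.0).astype(np.uint8)
    eq = lut[luma_u8].astype(np.float32)
    # Rescale all three channels by the luma gain, write back in place.
    scale = np.where(luma_u8 > 0, eq / np.maximum(luma, 1e-6), 1.0)
    rgba[..., :3] = np.clip(rgb * scale[..., None], 0, 255).astype(np.uint8)

# Inside a DSL custom PPH (requires DeepStream + pyds, and buffer-out-format
# RGBA as described above -- usage is an assumption to verify):
#
# def handler(gst_buffer, client_data):
#     batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
#     frame_meta = pyds.NvDsFrameMeta.cast(batch_meta.frame_meta_list.data)
#     frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
#     equalize_luma_inplace(frame)  # NumPy view over the buffer memory
#     return DSL_PAD_PROBE_OK
```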
I need to research to see if NVIDIA has any Python examples that work with the actual buffer. Robert.
Any updates here?
@itershukov I believe that @youngjae-avikus has made significant progress on this. You might consider joining our Discord server, where you can find him as @yyyyyyyyyyy.
Hello. As in the title, I would like to apply OpenCV functions such as blur, equalizeHist, and distortion correction to the frames, then pass the result on to streammux or pgie. However, using OpenCV seems to require moving the memory down to the CPU; instead, I would like to process a bit faster by working on memory shared between the CPU and GPU. Is this possible?
NVIDIA provides what feels like the same concept (linked below), and it seems to be faster because memory is shared via NVBUF_MEM_CUDA_UNIFIED. Of course, I may have misunderstood: https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_sample_custom_gstream.html
Is there a DSL service already provided for this that I am not aware of? Thank you.