@SeeRich IIRC it was because I only supported getting data from incoming NVMM gstreamer buffers (for gstCamera/gstDecoder), but not allocating my own outgoing NVMM buffers (for gstEncoder). Anyways, that API changed and was removed in later versions of JetPack, and on those I just disable it altogether. For multi-stream applications where it could have an appreciable impact, I'd recommend using DeepStream anyway, as jetson-utils is primarily meant to be easy and 'fast enough'.
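For context, the incoming NVMM path referred to above (gstCamera/gstDecoder) looks roughly like the sketch below on JetPack 4.x — the helper function name and error handling here are illustrative, not the actual gstBufferManager code:

```cpp
// Sketch only (not the actual jetson-utils code): zero-copy access to an
// *incoming* memory:NVMM GstBuffer on JetPack 4.x. The mapped payload of an
// NVMM buffer references an NvBuffer that nvbuf_utils can turn into a dmabuf
// fd, which can then be wrapped in an EGLImage and registered with CUDA.
#include <gst/gst.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <cuda_egl_interop.h>   // cudaGraphicsEGLRegisterImage, cudaEglFrame
#include <nvbuf_utils.h>        // ExtractFdFromNvBuffer, NvEGLImageFromFd

bool mapIncomingNVMM( GstBuffer* gstBuffer )
{
	GstMapInfo map;

	if( !gst_buffer_map(gstBuffer, &map, GST_MAP_READ) )
		return false;

	// the mapped payload of a memory:NVMM buffer references an NvBuffer
	int dmabuf_fd = -1;

	if( ExtractFdFromNvBuffer(map.data, &dmabuf_fd) != 0 )
	{
		gst_buffer_unmap(gstBuffer, &map);
		return false;
	}

	// wrap the dmabuf in an EGLImage and register it with CUDA
	EGLImageKHR eglImage = NvEGLImageFromFd(NULL, dmabuf_fd);

	cudaGraphicsResource* resource = NULL;
	cudaEglFrame eglFrame;

	if( cudaGraphicsEGLRegisterImage(&resource, eglImage, cudaGraphicsRegisterFlagsNone) == cudaSuccess &&
	    cudaGraphicsResourceGetMappedEglFrame(&eglFrame, resource, 0, 0) == cudaSuccess )
	{
		// eglFrame.frame.pPitch[n].ptr now points at plane n in device memory,
		// so CUDA kernels can read the YUV planes without any memcpy
	}

	// cleanup (normally deferred until the CUDA work using the frame is done)
	if( resource != NULL )
		cudaGraphicsUnregisterResource(resource);

	NvDestroyEGLImage(NULL, eglImage);
	gst_buffer_unmap(gstBuffer, &map);

	return true;
}
```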
@dusty-nv, thanks, I will take a look at DeepStream.
Hey! I'm working with a Jetson Nano 4GB and trying to squeeze out as much performance as possible. I'm trying to understand the memcpy from the CUDA YUV buffer to the GstBuffer in the `encodeYUV` function. Why is this memcpy necessary? It seems to be followed by an `nvvidconv name=vidconv ! video/x-raw(memory:NVMM)` element. Could we use nvbuf_utils to allocate the memory for the output of the final CUDA RGB-to-YUV conversion and then use something like `gst_buffer_append_memory` to avoid the memcpy? I feel like I may be completely missing something though.
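For concreteness, here is a rough (untested) sketch of the kind of thing I mean — it assumes nvbuf_utils for the NVMM allocation and GStreamer's dmabuf allocator for wrapping the fd, and I haven't checked whether nvvidconv / the `memory:NVMM` caps would actually accept a buffer built this way:

```cpp
// Rough, untested sketch of the idea: allocate the YUV surface with
// nvbuf_utils so the CUDA colorspace conversion could write straight into an
// NVMM-backed surface, then wrap its dmabuf fd in a GstMemory instead of
// memcpy'ing the frame into a newly allocated GstBuffer.
#include <gst/gst.h>
#include <gst/allocators/gstdmabuf.h>   // gst_dmabuf_allocator_new/alloc
#include <nvbuf_utils.h>                // NvBufferCreate, NvBufferGetParams

GstBuffer* allocNVMMOutputBuffer( int width, int height )
{
	// allocate a pitch-linear NV12 surface through nvbuf_utils
	int dmabuf_fd = -1;

	if( NvBufferCreate(&dmabuf_fd, width, height, NvBufferLayout_Pitch, NvBufferColorFormat_NV12) != 0 )
		return NULL;

	// query the plane sizes so we know how big the wrapped memory is
	NvBufferParams params;

	if( NvBufferGetParams(dmabuf_fd, &params) != 0 )
	{
		NvBufferDestroy(dmabuf_fd);
		return NULL;
	}

	gsize totalSize = 0;

	for( unsigned int n=0; n < params.num_planes; n++ )
		totalSize += params.psize[n];

	// wrap the dmabuf fd in a GstMemory and attach it to an empty GstBuffer
	// (the allocator takes ownership of the fd and closes it on release)
	GstAllocator* allocator = gst_dmabuf_allocator_new();
	GstMemory*    memory    = gst_dmabuf_allocator_alloc(allocator, dmabuf_fd, totalSize);

	GstBuffer* buffer = gst_buffer_new();
	gst_buffer_append_memory(buffer, memory);

	gst_object_unref(allocator);
	return buffer;
}
```

I realize a plain dmabuf-wrapped GstBuffer may not be what the `memory:NVMM` caps actually expect downstream, so that may be exactly the part I'm missing.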