mfoglio opened 2 years ago
Hi @mfoglio, I assume this issue is a best-practice request.
Since my application is incapable of processing 30 FPS per stream, it wouldn't be an issue if some of the frames were dropped. I probably won't need more than 5 FPS per stream.
Any time you start optimizing SW which utilizes NVIDIA HW, a good place to start is the nvidia-smi dmon
CLI utility. It will show you the Nvdec, Nvenc and CUDA core load levels, clocks and much more useful information.
Below you can see a screenshot of an application which is clearly bottlenecked by CUDA core performance:
As long as your application isn't limited by Nvdec performance, just decode all video frames one by one and discard frames you don't need.
Also, I don't recommend a single-threaded approach if you're aiming for top performance. Split it into 2 threads:
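A minimal sketch of that two-thread split, using a bounded queue for back-pressure (toy stand-ins for the decoder and consumer, not actual VPF API):

```python
import queue
import threading

# A small bound caps how many decoded frames (and thus how much memory)
# can pile up between the two threads.
frame_q = queue.Queue(maxsize=4)
STOP = object()  # sentinel telling the consumer to shut down

def decode_loop(decode_one):
    """Producer thread: decode_one() stands in for e.g. DecodeSingleSurface()."""
    while True:
        frame = decode_one()
        if frame is None:          # end of stream
            frame_q.put(STOP)
            return
        frame_q.put(frame)         # blocks when the consumer falls behind

def consume_loop(process):
    """Consumer thread: runs inference / display at its own pace."""
    while True:
        frame = frame_q.get()
        if frame is STOP:
            return
        process(frame)

# Toy demonstration with integers instead of surfaces.
frames = iter(range(10))
decoded = []
producer = threading.Thread(target=decode_loop, args=(lambda: next(frames, None),))
consumer = threading.Thread(target=consume_loop, args=(decoded.append,))
producer.start(); consumer.start()
producer.join(); consumer.join()
```

The same shape works with real surfaces; only decode_one and process change.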
Memory is particularly important for me since I want to decode many streams
I can only help you to track the memory allocations happening in VPF, not in PyTorch.
Thank you @rarzumanyan for the nvidia-smi dmon tip. I think my application is limited by Nvdec memory (GPU memory) usage rather than by the number of FPS processed by the decoder. I would like to decode about 30 video streams, but the consumer probably can't process more than 100-200 frames per second in total.
In my actual application I am adopting a thread-safe queue approach like the one you described. For this reason, I would like to know if, while the decoder waits (after it has pushed a frame onto the queue), it can go into some kind of sleep, or at least drop unnecessary data to free up GPU memory.
But here's the most important point: suppose there is a Queue of size=1. How can I guarantee that the queue always contains a close-to-real-time frame (i.e. no delays)? The code above returns old frames (from seconds to minutes ago) if the consumer can't keep up with it. I know that one solution from a generic Python perspective would be to have some kind of buffer (replacing the queue) that always keeps the last frame in memory by discarding the old ones. But this approach seems to waste lots of resources, and more importantly, I would like to know if there is any parameter that would guarantee that the frames returned by the decoder represent the present, not the past.
E.g. in gstreamer there are parameters that pause the pipeline (and, I guess, discard data) if the application can't consume frames fast enough. This works better than the approach where gstreamer decodes frames as fast as possible and discards all of them but the last.
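For what it's worth, the keep-only-the-latest-frame buffer described above can be implemented cheaply on the Python side (a sketch, not a VPF feature; dropped frames are simply dereferenced so their memory can be reclaimed):

```python
import queue

class LatestFrameSlot:
    """Size-1 queue that discards the stale frame when a new one arrives,
    so the consumer always sees something close to real time."""
    def __init__(self):
        self._q = queue.Queue(maxsize=1)

    def put(self, frame):
        while True:
            try:
                self._q.put_nowait(frame)
                return
            except queue.Full:
                try:
                    self._q.get_nowait()   # drop the old frame
                except queue.Empty:
                    pass                   # consumer beat us to it; retry

    def get(self, timeout=None):
        return self._q.get(timeout=timeout)

slot = LatestFrameSlot()
for frame in (1, 2, 3):   # producer races ahead of the consumer
    slot.put(frame)
```

A producer calls slot.put() for every decoded frame; the consumer's slot.get() then always sees the freshest one.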
Hi @mfoglio
I think my application is limited by Nvdec memory (GPU memory) usage rather than by the number of FPS processed by the decoder
There's no need to guess; there are lots of VPF profiling options:
1. nvidia-smi dmon.
2. Compile VPF with the USE_NVTX option and launch it under Nsight Systems to collect the application timeline.
3. gprof and callgrind to inspect CPU-side performance.
How can I guarantee that the queue always contains a close-to-real-time frame
Each decoded frame has a PTS, which is its presentation timestamp. It increases monotonically, and by its value you can estimate how "fresh" a decoded frame is. Take a look at #253 for more information on this topic.
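To make the PTS-freshness idea concrete, a small helper (assuming the common 90 kHz stream timebase; substitute your stream's actual timebase, which this sketch does not read from the stream):

```python
# 90 kHz is the standard RTP/MPEG clock for video; adjust if your stream
# uses a different timebase (an assumption in this sketch).
TICKS_PER_SECOND = 90000

def frame_age_seconds(frame_pts, newest_pts, ticks_per_second=TICKS_PER_SECOND):
    """Estimate how stale a frame is relative to the newest decoded PTS.
    PTS grows monotonically, so the tick difference maps directly to seconds."""
    return (newest_pts - frame_pts) / ticks_per_second

# At 30 fps, consecutive frames are 90000 / 30 = 3000 ticks apart.
```

A consumer can use this to skip any queued frame whose age exceeds a latency budget.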
This works better compared to the approach where gstreamer decodes frames as fast as possible and discard all of them but the last.
SW design is a topic of its own, so I can't help you with anything more substantial than advice, but there are ways to mitigate this problem.
E.g. a signal / slot connection between your consumer and producer. Since PyNvDecoder usually starts RTSP stream decoding not from the beginning and doesn't go all the way to the end (the camera is just broadcasting data over the network), your consumer may tell the decoder when to start decoding the next frame (e.g. when the decoded frames queue is close to depletion). It may cause some delays in decoding and/or data corruption, but that may happen any time you take data from a network.
Thank you for your detailed response.
Without getting much into detail, I can see from high-level nvidia-smi output that I am using 3298 MiB to decode 16 1080p streams. Is there a way to reduce the memory used? I don't need an exact answer; I am just wondering which parameters I can start playing with: the decoder? The demultiplexer?
Hi @mfoglio
Generally, the entry point to any investigation is the same: compile VPF with all possible diagnostic options and use the existing CUDA profiling tools.
E.g. the Nsight Systems profiler can track all CUDA API calls, and VPF uses CUDA API calls to allocate memory for video frames. Hence, by looking at the application timeline, you will see exactly what's happening and when.
Sometimes Nsight struggles to collect an application timeline for multithreaded Python scripts, so a simpler decoding script (such as one of the VPF samples) is probably a good place to start.
Hello @rarzumanyan, could you provide more details about the difference between flushSingleSurface and decodeSingleSurface? Does the first one allow me to discard old video frames / data without decoding them? I am still trying to reduce the GPU memory used by the decoding pipeline.
Also, what should I do when I want to delete a decoder? For instance, in the code above, how would you proceed to clean/flush/release all the necessary stuff when you don't need to decode the video anymore?
Hi @mfoglio
Nvdec is async by its nature, and there's a delay between encoded frame submission and the moment it's ready for display. This latency is hidden when the PyNvDecoder class is created in builtin mode (with a PyFfmpegDemuxer class within).
However, one can use an external demuxer like Gstreamer, PyAV or any other demuxer which produces an Annex.B elementary bitstream. In that case, PyNvDecoder acts asynchronously, and after your input is over, there are still some frames in the Nvdec queue.
FlushSingleSurface is used to flush one such frame from the queue. Take a closer look at SampleDemuxDecode.py for reference.
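Sketched as a helper, the drain loop could look like this (the decoder object and callback are hypothetical; FlushSingleSurface is the method described above, used as in SampleDemuxDecode.py):

```python
def drain_decoder(nv_dec, on_surface):
    """After the last packet has been fed, Nvdec still holds decoded frames.
    Keep flushing until an empty surface signals the queue is drained."""
    flushed = 0
    while True:
        surf = nv_dec.FlushSingleSurface()
        if surf.Empty():          # empty surface == nothing left to flush
            return flushed
        on_surface(surf)          # hand the frame to your pipeline
        flushed += 1
```

Call it once your demuxer / pipe reports end of input, before dropping the decoder.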
Regarding PyNvDecoder class deletion - it acts just the same as any other Python class. When its lifetime is over, it cleans up its resources.
Hello @rarzumanyan , this is really interesting! I really appreciate your help. I have a few things in mind to try, as well as a few other questions... Sorry! And thanks!
Is there any way to set a maximum size for the queue that you mentioned above? This way I could avoid "wasting" GPU memory by keeping frames in memory that I would drop later on anyway because of a slow consumer. Otherwise I guess I could use a standalone ffmpeg demuxer and drop packets until my consumer is ready; at that point I could resume decoding packets using nvDec.DecodeSurfaceFromPacket(packet) until a valid surface is returned. I am not sure if this would return corrupted frames or if it would just create a small delay (because the decoder would wait until it has a valid frame before returning a surface).
It seems that PyFfmpegDemuxer can receive a dictionary as its second parameter. I guess this can be used to forward arguments to ffmpeg. Is this correct? If yes, what are the parameters that can be used? I am not sure which ffmpeg "object" is actually used by PyFfmpegDemuxer, so I don't know where to look in the ffmpeg documentation.
Thank you, thank you, thank you!
Hi @mfoglio,
Is there any way to set a maximum size for the queue that you mentioned above?
There are 2 places where the memory for decoded surfaces is allocated.
First is decoded surfaces pool size: https://github.com/NVIDIA/VideoProcessingFramework/blob/ba47dcad8c285623c34b013e2a7180402ad0c707/PyNvCodec/inc/PyNvCodec.hpp#L241-L247
You can slightly reduce the memory consumption by changing the poolFrameSize variable value.
Second is the decoder initialization stage:
The GetNumDecodeSurfaces() function is used to determine how many surfaces are needed for Nvdec to ensure proper DPB operation. It allocates memory a bit generously in some cases but keeps the code simple.
You can get a better estimate of the required number of surfaces by going through the ff_nvdec_decode_init() function in the libavcodec/nvdec.c file, which is part of FFmpeg. It uses a more sophisticated approach to DPB size determination for various codecs. I'm not saying it's ideal, but it's publicly available and it shows reasonable decoding memory consumption.
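As an illustration of what level-based DPB sizing looks like, here is a hedged sketch of the H.264 bound (the MaxDpbMbs values and the min(..., 16) formula come from the H.264 standard's Annex A; this is not VPF's or FFmpeg's actual code):

```python
# MaxDpbMbs per H.264 level (spec Annex A, Table A-1); a few common levels.
MAX_DPB_MBS = {30: 8100, 31: 18000, 40: 32768, 41: 32768, 50: 110400, 51: 184320}

def h264_max_dpb_frames(width, height, level_idc):
    """Upper bound on frames the DPB must hold: min(MaxDpbMbs / frame MBs, 16).
    A decoder needs roughly this many surfaces, plus a few extra in flight."""
    macroblocks = ((width + 15) // 16) * ((height + 15) // 16)
    return min(MAX_DPB_MBS[level_idc] // macroblocks, 16)

# 1080p at level 4.1: 120 * 68 = 8160 MBs -> 32768 // 8160 = 4 reference
# frames, far below the worst-case 16 a naive allocator might assume.
```

This is why a per-stream calculation can shave a lot of vRAM compared to a one-size-fits-all pool.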
It seems that PyFfmpegDemuxer can receive a dictionary as its second parameter. I guess this can be used to forward arguments to ffmpeg. Is this correct? If yes, what are the parameters that can be used? I am not sure what ffmpeg "object" is actually used by PyFfmpegDemuxer so I don't know where to look in the ffmpeg documentation.
VPF accepts a dictionary that is converted to an AVDictionary structure and passed to the avformat_open_input() function, which initializes the AVFormatContext structure:
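For illustration, such a dictionary holds ordinary FFmpeg AVOptions for the input. rtsp_transport appears later in this thread; the max_delay key is an assumption drawn from FFmpeg's general option set, and the helper function is hypothetical:

```python
# FFmpeg AVOptions forwarded to avformat_open_input() as an AVDictionary.
rtsp_options = {
    "rtsp_transport": "tcp",  # force TCP interleaved transport
    "max_delay": "5000000",   # demuxer reorder-buffer limit, in microseconds
}

def open_demuxer(url, options=rtsp_options):
    """Open a VPF demuxer with the given FFmpeg options (requires PyNvCodec)."""
    import PyNvCodec as nvc
    return nvc.PyFFmpegDemuxer(url, options)
```

Any option FFmpeg documents for the chosen protocol/demuxer should be usable here, passed as string key/value pairs.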
Thanks @rarzumanyan . I can only see the following compatible constructor arguments:
1. PyNvCodec.PyNvDecoder(arg0: int, arg1: int, arg2: PyNvCodec.PixelFormat, arg3: PyNvCodec.CudaVideoCodec, arg4: int)
2. PyNvCodec.PyNvDecoder(arg0: str, arg1: int, arg2: Dict[str, str])
3. PyNvCodec.PyNvDecoder(arg0: str, arg1: int)
4. PyNvCodec.PyNvDecoder(arg0: int, arg1: int, arg2: PyNvCodec.PixelFormat, arg3: PyNvCodec.CudaVideoCodec, arg4: int, arg5: int)
5. PyNvCodec.PyNvDecoder(arg0: str, arg1: int, arg2: int, arg3: Dict[str, str])
6. PyNvCodec.PyNvDecoder(arg0: str, arg1: int, arg2: int)
At the moment I am initializing the decoder with:
# Initialize standalone demuxer.
self.nvDmx = nvc.PyFFmpegDemuxer(encFile) # {"latency": "0", "drop-on-latency": "true"}
# Initialize decoder.
self.nvDec = nvc.PyNvDecoder(
self.nvDmx.Width(), self.nvDmx.Height(), self.nvDmx.Format(), self.nvDmx.Codec(), self.ctx.handle, self.str.handle
)
How can I provide the parameter poolFrameSize?
Possible OT: it seems that the demuxer keeps disconnecting from the rtsp stream. The following is a log captured over about a minute:
i-01f5ae3961a12c713 Thread-5 2021-10-25 17:16:19,137 - __main__ - INFO - FPS 0.0
[hls,applehttp @ 0x6e1a6c0] Opening 'http://localhost:8081/cbb48b92-f74d-4ad5-b8a9-3affbefcc17e_default/00dd93c3-1236-4c63-8a5f-4c5b452430f5/chunks.m3u8?nimblesessionid=339' for reading
[hls,applehttp @ 0x6e1a6c0] Opening 'http://localhost:8081/cbb48b92-f74d-4ad5-b8a9-3affbefcc17e_default/00dd93c3-1236-4c63-8a5f-4c5b452430f5/l_4116_0_0.ts?nimblesessionid=339' for reading
[AVBSFContext @ 0x6e32e80] Invalid NAL unit 0, skipping.
# other output
[hls,applehttp @ 0x6e1a6c0] Opening 'http://localhost:8081/cbb48b92-f74d-4ad5-b8a9-3affbefcc17e_default/00dd93c3-1236-4c63-8a5f-4c5b452430f5/chunks.m3u8?nimblesessionid=339' for reading
[hls,applehttp @ 0x6e1a6c0] Opening 'http://localhost:8081/cbb48b92-f74d-4ad5-b8a9-3affbefcc17e_default/00dd93c3-1236-4c63-8a5f-4c5b452430f5/chunks.m3u8?nimblesessionid=339' for reading
[hls,applehttp @ 0x6e1a6c0] Opening 'http://localhost:8081/cbb48b92-f74d-4ad5-b8a9-3affbefcc17e_default/00dd93c3-1236-4c63-8a5f-4c5b452430f5/chunks.m3u8?nimblesessionid=339' for reading
[hls,applehttp @ 0x6e1a6c0] Opening 'http://localhost:8081/cbb48b92-f74d-4ad5-b8a9-3affbefcc17e_default/00dd93c3-1236-4c63-8a5f-4c5b452430f5/chunks.m3u8?nimblesessionid=339' for reading
[hls,applehttp @ 0x6e1a6c0] Opening 'http://localhost:8081/cbb48b92-f74d-4ad5-b8a9-3affbefcc17e_default/00dd93c3-1236-4c63-8a5f-4c5b452430f5/chunks.m3u8?nimblesessionid=339' for reading
It ended with:
[hls,applehttp @ 0x6e1a6c0] Opening 'http://localhost:8081/cbb48b92-f74d-4ad5-b8a9-3affbefcc17e_default/00dd93c3-1236-4c63-8a5f-4c5b452430f5/chunks.m3u8?nimblesessionid=339' for reading
[http @ 0x7ff285bdfde0] HTTP error 404 Not Found
However, ffprobe seems to find the stream:
ffprobe rtsp://localhost/cbb48b92-f74d-4ad5-b8a9-3affbefcc17e_default/00dd93c3-1236-4c63-8a5f-4c5b452430f5
ffprobe version N-104411-gcf0881bcfc Copyright (c) 2007-2021 the FFmpeg developers
built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)
configuration: --prefix=/home/ubuntu/pycharm/libs/FFmpeg/build_x64_release_shared --disable-static --disable-stripping --disable-doc --enable-shared --enable-openssl --enable-network --enable-protocol=tcp --enable-demuxer=rtsp --enable-decoder=h264
libavutil 57. 7.100 / 57. 7.100
libavcodec 59. 12.100 / 59. 12.100
libavformat 59. 6.100 / 59. 6.100
libavdevice 59. 0.101 / 59. 0.101
libavfilter 8. 15.100 / 8. 15.100
libswscale 6. 1.100 / 6. 1.100
libswresample 4. 0.100 / 4. 0.100
[rtsp @ 0x556be1c4ecc0] method SETUP failed: 461 Unsupported transport
Input #0, rtsp, from 'rtsp://localhost/cbb48b92-f74d-4ad5-b8a9-3affbefcc17e_default/00dd93c3-1236-4c63-8a5f-4c5b452430f5':
Duration: N/A, start: 0.016667, bitrate: N/A
Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 60 fps, 60 tbr, 90k tbn
Sometimes the demuxer self.nvDmx = nvc.PyFFmpegDemuxer(encFile) fails directly with:
ValueError: FFmpegDemuxer: no AVFormatContext provided.
If instead I try to initialize the demuxer with nvc.PyFFmpegDemuxer(encFile, {"rtsp_transport": "tcp"}) and use an rtsp url instead of the m3u8, I encounter a Failed to read frame: End of file error after a few seconds, and sometimes a ValueError: Unsupported FFmpeg pixel format upon initialization.
Hi @mfoglio
I can only see the following compatible constructor arguments: How can I provide the parameter poolFrameSize?
This happens because the pool size isn't exported to Python land; you have to change the hard-coded value in C++ and recompile VPF. Honestly, the queue size was never exported to Python simply because nobody has ever asked ))
However, ffprobe seems to find the stream:
Reading input from RTSP cameras is the single most painful thing to do. I'd say 90% of user issues are about missing connections and such. There are multiple ways of mitigating this, including demuxing with an external demuxer (see the project's wiki) or PyAV. Unfortunately, the required PyAV functionality was never merged into the PyAV main branch, so this problem stays half solved.
EDIT: the gstreamer pipeline seems to work; it wasn't working because of a typo. The ffmpeg pipeline does not work.
Hello @rarzumanyan, and thanks again for following me through my journey. I tried the example from the wiki without success. The Gstreamer option seems to be stuck without doing anything, and FFmpeg does not return any frames.
Code to reproduce the issue:
from components.workers.video.vpf import initialize_vpf
import pycuda.driver as cuda
import ffmpeg
import subprocess
import numpy as np
import PyNvCodec as nvc
rtsp_url = "rtsp://localhost/cbb48b92-f74d-4ad5-b8a9-3affbefcc17e_default/0d943055-d4f2-49d2-a8fa-189176228ae1"
# Retain primary CUDA device context and create separate stream per thread.
cuda.init()
ctx = cuda.Device(0).retain_primary_context()
ctx.push()
str = cuda.Stream()
ctx.pop()
# Option 1: Gstreamer with pipeline from wiki
pipeline = \
f"rtspsrc location={rtsp_url} " +\
"protocols=tcp ! " + \
"queue ! " + \
"'application/x-rtp,media=video' ! " + \
"rtph264depay ! " + \
"h264parse ! " + \
"video/x-h264, stream-format='byte-stream' ! " + \
"filesink location=/dev/stdout"
proc = subprocess.Popen(
f"/opt/intel/openvino_2021.1.110/data_processing/gstreamer/bin/gst-launch-1.0 {pipeline}",
shell=True,
stdout=subprocess.PIPE
)
# Option 2: FFmpeg (from wiki, not sure if it applies to rtsp streams)
args = (ffmpeg
.input(rtsp_url)
.output('pipe:', vcodec='copy', **{'bsf:v': 'h264_mp4toannexb'}, format='h264')
.compile())
proc = subprocess.Popen(args, stdout=subprocess.PIPE)
# Decoder (parameters taken by trying to initialize demuxer multiple times until initialization succeeded)
video_width = 1920
video_height = 1080
video_format = nvc.PixelFormat.NV12
video_codec = nvc.CudaVideoCodec.H264
video_color_space = nvc.ColorSpace.BT_709
video_color_range = nvc.ColorRange.JPEG
# Initialize decoder.
nvDec = nvc.PyNvDecoder(
video_width, video_height, video_format, video_codec, ctx.handle, str.handle
)
print("nvDec")
while True:
# Read 4Kb of data as this is most common mem page size
bits = proc.stdout.read(4096)
if not len(bits):
print("Empty page")
continue
# Decode
packet = np.frombuffer(buffer=bits, dtype=np.uint8)
# Decoder is async by design.
# As it consumes packets from demuxer one at a time it may not return
# decoded surface every time the decoding function is called.
rawSurface = nvDec.DecodeSurfaceFromPacket(packet)
if (rawSurface.Empty()):
print("No more video frames")
continue
print("Surface decoded") # never printed
Output for ffmpeg:
ffmpeg version N-104411-gcf0881bcfc Copyright (c) 2000-2021 the FFmpeg developers
built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)
configuration: --prefix=/home/ubuntu/pycharm/libs/FFmpeg/build_x64_release_shared --disable-static --disable-stripping --disable-doc --enable-shared --enable-openssl --enable-network --enable-protocol=tcp --enable-demuxer=rtsp --enable-decoder=h264
libavutil 57. 7.100 / 57. 7.100
libavcodec 59. 12.100 / 59. 12.100
libavformat 59. 6.100 / 59. 6.100
libavdevice 59. 0.101 / 59. 0.101
libavfilter 8. 15.100 / 8. 15.100
libswscale 6. 1.100 / 6. 1.100
libswresample 4. 0.100 / 4. 0.100
[rtsp @ 0x562c6c4e6400] method SETUP failed: 461 Unsupported transport
Input #0, rtsp, from 'rtsp://localhost/cbb48b92-f74d-4ad5-b8a9-3affbefcc17e_default/0d943055-d4f2-49d2-a8fa-189176228ae1':
Duration: N/A, start: 0.016656, bitrate: N/A
Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 60 fps, 60 tbr, 90k tbn
Output #0, h264, to 'pipe:':
Metadata:
encoder : Lavf59.6.100
Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 60 fps, 60 tbr, 60 tbn
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
No more video frames
No more video frames
No more video frames
...
No more video frames
I am looking for a public rtsp stream that you can use to replicate on your side (or that I can use to check that the code works on some streams on my side).
ffprobe
output:
ffprobe rtsp://localhost/cbb48b92-f74d-4ad5-b8a9-3affbefcc17e_default/0d943055-d4f2-49d2-a8fa-189176228ae1
ffprobe version N-104411-gcf0881bcfc Copyright (c) 2007-2021 the FFmpeg developers
built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)
configuration: --prefix=/home/ubuntu/pycharm/libs/FFmpeg/build_x64_release_shared --disable-static --disable-stripping --disable-doc --enable-shared --enable-openssl --enable-network --enable-protocol=tcp --enable-demuxer=rtsp --enable-decoder=h264
libavutil 57. 7.100 / 57. 7.100
libavcodec 59. 12.100 / 59. 12.100
libavformat 59. 6.100 / 59. 6.100
libavdevice 59. 0.101 / 59. 0.101
libavfilter 8. 15.100 / 8. 15.100
libswscale 6. 1.100 / 6. 1.100
libswresample 4. 0.100 / 4. 0.100
[rtsp @ 0x564cb71e9cc0] method SETUP failed: 461 Unsupported transport
Input #0, rtsp, from 'rtsp://localhost/cbb48b92-f74d-4ad5-b8a9-3affbefcc17e_default/0d943055-d4f2-49d2-a8fa-189176228ae1':
Duration: N/A, start: 0.016667, bitrate: N/A
Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 60 fps, 60 tbr, 90k tbn
I could verify that the stream works fine with:
import cv2
from matplotlib import pyplot as plt
cap = cv2.VideoCapture("rtsp://localhost/cbb48b92-f74d-4ad5-b8a9-3affbefcc17e_default/0d943055-d4f2-49d2-a8fa-189176228ae1")
for _ in range(10):
status, frame = cap.read()
plt.imshow(frame)
plt.show()
Also, if I run the ffmpeg command in a console, the console starts printing binary data:
ffmpeg -i rtsp://localhost/cbb48b92-f74d-4ad5-b8a9-3affbefcc17e_default/0d943055-d4f2-49d2-a8fa-189176228ae1 -f h264 -bsf:v h264_mp4toannexb -vcodec copy pipe:
@mfoglio
Also, if I run the ffmpeg command in a console, the console starts printing binary data
This is expected behavior. If you take a closer look at the command line:
ffmpeg -i rtsp://localhost/cbb48b92-f74d-4ad5-b8a9-3affbefcc17e_default/0d943055-d4f2-49d2-a8fa-189176228ae1 -f h264 -bsf:v h264_mp4toannexb -vcodec copy pipe:
It reads data from the RTSP source and applies the h264_mp4toannexb bitstream filter. This filter extracts the Annex.B elementary video stream from the incoming data and puts the output into the pipe, which is to be fed to Nvdec by VPF. Basically, this is what a demuxer does - it demultiplexes incoming data (video, audio, subtitle tracks, etc.) into separate data streams. A "pure" video stream is binary data formed in a special way; it's called an Annex.B elementary bitstream because it conforms to the binary syntax described in Annex B of the H.264 / H.265 video codec standards.
Nvdec HW can't work with video containers like AVI, MKV or any other. It expects an Annex.B elementary stream, which consists of NAL units. This is binary input containing only compressed video, without any extra information (like audio tracks or subtitles), because the video codec standards only describe the video coding essentials and don't cover any video containers.
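To make the Annex.B structure concrete, here is a small sketch that splits an elementary stream on NAL start codes (the payload bytes in the demo are dummies, not a real SPS or slice):

```python
def split_nal_units(buf: bytes):
    """Split an Annex.B elementary stream on 00 00 01 / 00 00 00 01 start
    codes. Returns the NAL unit payloads with the start codes stripped."""
    starts = []
    i = 0
    while True:
        i = buf.find(b"\x00\x00\x01", i)
        if i < 0:
            break
        # A 4-byte start code is just a 3-byte one preceded by another zero.
        begin = i - 1 if i > 0 and buf[i - 1] == 0 else i
        starts.append((begin, i + 3))
        i += 3
    units = []
    for n, (_, payload) in enumerate(starts):
        end = starts[n + 1][0] if n + 1 < len(starts) else len(buf)
        units.append(buf[payload:end])
    return units

# Two NAL units: an SPS-like and a slice-like payload (contents are dummies).
stream = b"\x00\x00\x00\x01\x67\xAA" + b"\x00\x00\x01\x65\xBB\xCC"
```

Feeding anything else (an MP4 box, an MKV cluster) to Nvdec fails precisely because it doesn't match this start-code syntax.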
@rarzumanyan yes, I added that comment to confirm that ffmpeg was actually working on my machine. So it seems that ffmpeg returns data but the decoder cannot parse any frames.
In fact, the packets are not empty (I tried to print them). However, rawSurface = nvDec.DecodeSurfaceFromPacket(packet) always returns an empty surface.
Let's start with something simpler.
Just save your ffmpeg output to a local file and decode it with SampleDecode.py. We need to make sure that the RTSP part is the culprit.
I saved about a minute of video using ffmpeg -i rtsp://localhost/cbb48b92-f74d-4ad5-b8a9-3affbefcc17e_default/0d943055-d4f2-49d2-a8fa-189176228ae1 -acodec copy -vcodec copy rtsp_stream.mp4. I restarted the code after replacing the rtsp url with the local file path. It decoded 1566 surfaces.
I used the ffmpeg demuxer provided in my example above (using a subprocess).
EDIT: not sure if it's useful, but here's the output of ffmpeg when saving the video (I stopped it with Ctrl + C):
ffmpeg -i rtsp://localhost/cbb48b92-f74d-4ad5-b8a9-3affbefcc17e_default/0d943055-d4f2-49d2-a8fa-189176228ae1 -acodec copy -vcodec copy rtsp_stream.mp4
ffmpeg version N-104411-gcf0881bcfc Copyright (c) 2000-2021 the FFmpeg developers
built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)
configuration: --prefix=/home/ubuntu/pycharm/libs/FFmpeg/build_x64_release_shared --disable-static --disable-stripping --disable-doc --enable-shared --enable-openssl --enable-network --enable-protocol=tcp --enable-demuxer=rtsp --enable-decoder=h264
libavutil 57. 7.100 / 57. 7.100
libavcodec 59. 12.100 / 59. 12.100
libavformat 59. 6.100 / 59. 6.100
libavdevice 59. 0.101 / 59. 0.101
libavfilter 8. 15.100 / 8. 15.100
libswscale 6. 1.100 / 6. 1.100
libswresample 4. 0.100 / 4. 0.100
[rtsp @ 0x5571f2668440] method SETUP failed: 461 Unsupported transport
Input #0, rtsp, from 'rtsp://localhost/cbb48b92-f74d-4ad5-b8a9-3affbefcc17e_default/0d943055-d4f2-49d2-a8fa-189176228ae1':
Duration: N/A, start: 0.016667, bitrate: N/A
Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 60 fps, 60 tbr, 90k tbn
Output #0, mp4, to 'rtsp_stream.mp4':
Metadata:
encoder : Lavf59.6.100
Stream #0:0: Video: h264 (High) (avc1 / 0x31637661), yuvj420p(pc, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 60 fps, 60 tbr, 90k tbn
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Press [q] to stop, [?] for help
[mp4 @ 0x5571f2673340] Timestamps are unset in a packet for stream 0. This is deprecated and will stop working in the future. Fix your code to set the timestamps properly
[mp4 @ 0x5571f2673340] Non-monotonous DTS in output stream 0:0; previous: 0, current: 0; changing to 1. This may result in incorrect timestamps in the output file.
frame= 1568 fps= 25 q=-1.0 Lsize= 69677kB time=00:01:01.38 bitrate=9299.0kbits/s speed=0.962x
video:69668kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.013563%
Exiting normally, received signal 2.
@rarzumanyan I can confirm that the gstreamer pipeline works. There was a typo (a missing space). The FFmpeg pipeline does not work, but that's not an issue as long as gstreamer works.
Going back to memory optimization, can instances of PySurfaceConverter and PySurfaceResizer be shared safely among multiple video and cuda streams? I guess I might have to fight a little bit with cuda streams to be able to share data among them. But can it theoretically be done, or do the objects have internal attributes that would not allow that to be done safely?
Hi @mfoglio
can instances of PySurfaceConverter and PySurfaceResizer be shared among multiple video and cuda streams safely?
There are 2 types of constructors for most VPF classes which use CUDA:
1) Those which accept a GPU ID. In this case, CudaResMgr will provide the class constructor with a CUDA context and stream:
https://github.com/NVIDIA/VideoProcessingFramework/blob/ba47dcad8c285623c34b013e2a7180402ad0c707/PyNvCodec/src/PyNvCodec.cpp#L396-L404
CudaResMgr retains the primary CUDA context for any given device and creates a single CUDA stream for it. So all VPF classes which are instantiated with the same GPU ID will share the same context (the primary CUDA context for that GPU) and the same CUDA stream (not the default CUDA stream, but one created by CudaResMgr).
2) Those which take given CUDA context and stream as constructor arguments: https://github.com/NVIDIA/VideoProcessingFramework/blob/ba47dcad8c285623c34b013e2a7180402ad0c707/PyNvCodec/src/PyNvCodec.cpp#L406-L414
In this case, the given CUDA context and stream references are saved within the class instance and used later when doing CUDA work.
You can either rely on CudaResMgr and pass a GPU ID, or provide a context and stream explicitly for more flexibility using pycuda. Both options are illustrated in the samples:
https://github.com/NVIDIA/VideoProcessingFramework/blob/ba47dcad8c285623c34b013e2a7180402ad0c707/SampleDemuxDecode.py#L47-L59
Choose the option you find most suitable. One option is not better or worse than the other; they are just different.
To the best of my knowledge, you shall have no issues using CUDA memory objects created in a single context in operations submitted to different streams. Speaking in VPF terms: you can pass Surface and CudaBuffer objects to VPF classes which use different streams, as long as those Surface and CudaBuffer objects were created in the same CUDA context.
As a rule of thumb, I recommend using a single CUDA context per GPU and retaining the primary CUDA context instead of creating your own. This is illustrated in SampleDemuxDecode.py. If you're aiming at minimizing memory consumption, I also don't recommend creating any additional CUDA contexts, as there are some driver-internal objects stored in vRAM associated with each active context.
But what do you think, besides the CUDA streams? For instance, if they run asynchronously, I'll probably need to put a thread lock around them to prevent the possibility of having surfaces switched across consumers. Also, I am not sure whether a call to an instance of PySurfaceConverter or PySurfaceResizer is affected by a previous call to the same object: for instance, you wouldn't be able to feed multiple videos to the same decoder, because it wouldn't be able to decode the frames.
However, taking a step back, I am facing a bigger issue. This is the code that I have so far:
import subprocess
import numpy as np
import PyNvCodec as nvc
rtsp_url = "rtsp://localhost/cbb48b92-f74d-4ad5-b8a9-3affbefcc17e_default/57a77f83-6fed-4316-a2c9-2c8813a49fe1"
pipeline = \
f"rtspsrc location={rtsp_url} " +\
"protocols=tcp ! " + \
"queue ! " + \
"'application/x-rtp,media=video' ! " + \
"rtph264depay ! " + \
"h264parse ! " + \
"video/x-h264, stream-format='byte-stream' ! " + \
"filesink location=/dev/stdout"
proc = subprocess.Popen(
f"/opt/intel/openvino_2021.1.110/data_processing/gstreamer/bin/gst-launch-1.0 {pipeline}",
shell=True,
stdout=subprocess.PIPE
)
# Decoder (parameters taken by trying to initialize demuxer multiple times until initialization succeeded)
video_width = int(1920)
video_height = int(1080)
video_format = nvc.PixelFormat.NV12
video_codec = nvc.CudaVideoCodec.H264
# Initialize decoder.
nvDec = nvc.PyNvDecoder(
video_width, video_height, video_format, video_codec, 0
)
c = 0
while True:
bits = proc.stdout.read(4090)
if not len(bits):
continue
packet = np.frombuffer(buffer=bits, dtype=np.uint8)
rawSurface = nvDec.DecodeSurfaceFromPacket(packet)
if rawSurface.Empty():
continue
print(f"Surface decoded {c}")
c = c + 1
It works but it seems to be affected by a memory leak. I am running the code on a video stream and I have already reached 8 GB of GPU memory used, and it keeps increasing. As far as I understand, there's no way to fix this using plain Python VPF. Do you see any possible solution? You mentioned this https://github.com/NVIDIA/VideoProcessingFramework/issues/257#issuecomment-950681056 but these possible solutions would require editing the C++ code that performs video decoding, which is unfortunately quite far from my domain knowledge. I am happy to try to dive into it, but I would like to know if you see any easier solution.
Update: the code above hit an out of memory error on a T4.
Surface decoded 30061
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvDecoder.cpp:526
CUDA error: CUDA_ERROR_OUT_OF_MEMORY
out of memory
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvDecoder.cpp:526
CUDA error: CUDA_ERROR_OUT_OF_MEMORY
out of memory
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvDecoder.cpp:488
CUDA error: CUDA_ERROR_MAP_FAILED
mapping of buffer object failed
Surface decoded 30062
Cuvid parser faced error.
Not sure if it matters, but VPF was compiled with USE_NVTX.
@mfoglio
It works but it seems to be affected by a memory leak.
Can't reproduce on my machine with SampleDecode.py or SampleDemuxDecode.py.
I run a command like this:
python3 ./SampleDecode.py 0 ~/Videos/bbb_sunflower_1080p_30fps_normal.mp4 ./tmp.nv12
Memory consumption during decoding:
python3 uses 195 MB constantly.
Memory consumption when decoding is over:
Same thing with SampleDemuxDecode.py
Why not do incremental analysis? If SampleDemuxDecode.py doesn't show memory leaks, add another layer on top of it and take input from the pipeline.
P.S. From the HW description (Tesla T4), it looks like you're using VPF in production. Let's establish contact via email and discuss what could be done. My work email is in my profile info. Honestly, to me this thread doesn't look like a VPF issue but rather like SW development consulting, so let's bring it to a new level ))
I think you should be able to reproduce the memory leak using this public rtsp stream: rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov and the following code:
import subprocess
import numpy as np
import PyNvCodec as nvc
# rtsp_url = "rtsp://localhost/cbb48b92-f74d-4ad5-b8a9-3affbefcc17e_default/57a77f83-6fed-4316-a2c9-2c8813a49fe1"
rtsp_url = "rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov"
pipeline = \
f"rtspsrc location={rtsp_url} " +\
"protocols=tcp ! " + \
"queue ! " + \
"'application/x-rtp,media=video' ! " + \
"rtph264depay ! " + \
"h264parse ! " + \
"video/x-h264, stream-format='byte-stream' ! " + \
"filesink location=/dev/stdout"
proc = subprocess.Popen(
f"/opt/intel/openvino_2021.1.110/data_processing/gstreamer/bin/gst-launch-1.0 {pipeline}",
shell=True,
stdout=subprocess.PIPE
)
# Decoder (parameters taken by trying to initialize demuxer multiple times until initialization succeeded)
video_width = int(1920)
video_height = int(1080)
video_format = nvc.PixelFormat.NV12
video_codec = nvc.CudaVideoCodec.H264
# Initialize decoder.
nvDec = nvc.PyNvDecoder(
video_width, video_height, video_format, video_codec, 0
)
c = 0
while True:
bits = proc.stdout.read(4090)
if not len(bits):
continue
packet = np.frombuffer(buffer=bits, dtype=np.uint8)
rawSurface = nvDec.DecodeSurfaceFromPacket(packet)
if rawSurface.Empty():
continue
print(f"Surface decoded {c}")
c = c + 1
Let me know if you can reproduce it. I launched the gstreamer pipeline in a console and, as expected, it does not use any GPU memory. So I would assume the memory leak is caused by VPF. I'll contact you ;)
@mfoglio
I don't have GStreamer installed on my machine, and even if I install it with a package manager, it's not going to be the same as yours: /opt/intel/openvino_2021.1.110/data_processing/gstreamer/bin/gst-launch-1.0
I can decode the mentioned RTSP stream using SampleDemuxDecode.py with stable GPU memory consumption:
python3 ./SampleDemuxDecode.py 0 rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov ./tmp.yuv
Don't get me wrong, but I will not do the job for you. Please isolate the issue and make sure it lies inside VPF.
Thank you for your help!
In order to verify whether it was a VPF issue, I decided to start from a new clean Ubuntu installation.
The Gstreamer pipeline now works without any memory leak.
As for ffmpeg, it does not work with every RTSP stream (it gets stuck on some), but I found a possible fix. The example in the wiki (https://github.com/NVIDIA/VideoProcessingFramework/wiki/Decoding-video-from-RTSP-camera) with ffmpeg works with the problematic streams if we replace {'bsf:v': 'h264_mp4toannexb'} with {'bsf:v': 'h264_mp4toannexb,dump_extra'}.
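For reference, the equivalent change on the plain ffmpeg command line appends dump_extra to the bitstream filter chain. A minimal sketch (build_ffmpeg_cmd is a hypothetical helper; the flags mirror the wiki sample rather than a verified production setup):

```python
def build_ffmpeg_cmd(rtsp_url: str) -> list:
    """Build an ffmpeg command line that remuxes an RTSP stream to a raw
    Annex B byte stream on stdout; dump_extra re-inserts SPS/PPS extradata
    so the downstream decoder can initialize."""
    return [
        "ffmpeg",
        "-i", rtsp_url,      # input RTSP stream
        "-c:v", "copy",      # no re-encode, just remux
        "-bsf:v", "h264_mp4toannexb,dump_extra",
        "-f", "h264",        # raw Annex B output
        "pipe:1",            # write to stdout
    ]

# proc = subprocess.Popen(build_ffmpeg_cmd(url), stdout=subprocess.PIPE)
```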
Happy to make a PR for this small but hopefully useful fix.
@mfoglio
Thank you for your work! It's very significant!
Could you share your final code, which decodes as many RTSP streams as possible on a single GPU?
Originally posted by @mfoglio in https://github.com/NVIDIA/VideoProcessingFramework/issues/257#issuecomment-952302246
Hi @stu-github , I am waiting for @rarzumanyan to finish up some fixes. I will share the code as soon as we have something more stable working
Hi @stu-github and @mfoglio
Just to note: I'm in the process of developing a feature that shall allow passing open file handles to the demuxer, which shall make RTSP camera access easier; it's not a fix. This is taking longer than expected.
Meanwhile you can use the code sample from the project wiki, which illustrates how to read frames from an RTSP camera with an ffmpeg process.
There's one caveat which I'd like to address with the mentioned new feature: right now one can only read from the ffmpeg output in fixed-size chunks, but actual compressed frames may be of different sizes, which requires you to fine-tune the speed at which VPF reads from the ffmpeg pipe.
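One way to work around the fixed-size-chunk limitation, as a sketch (iter_nal_units is an illustrative helper, not part of the VPF API): buffer the pipe output and split it on Annex B start codes, so each packet handed to the decoder is a complete NAL unit rather than an arbitrary 4 KB slice.

```python
def iter_nal_units(read_chunk, chunk_size=4096):
    """Yield complete Annex B NAL units from a byte source.

    read_chunk: callable returning up to chunk_size bytes (b'' on EOF),
    e.g. proc.stdout.read. Data is buffered so NAL unit boundaries don't
    have to line up with read sizes.
    """
    buf = b""
    start = b"\x00\x00\x00\x01"
    while True:
        data = read_chunk(chunk_size)
        if not data:
            if buf:
                yield buf  # flush the trailing unit at EOF
            return
        buf += data
        # emit every unit whose end (the next start code) is already buffered
        while True:
            nxt = buf.find(start, len(start))
            if nxt < 0:
                break
            yield buf[:nxt]
            buf = buf[nxt:]
```

A usage sketch would be `for unit in iter_nal_units(proc.stdout.read): packet = np.frombuffer(unit, dtype=np.uint8)`, which keeps the decoder fed with whole units regardless of pipe timing.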
Hi @mfoglio and @stu-github
Please take a look at the v1.1.1 branch ToT; it has a SampleDecodeRTSP.py sample aimed at illustrating this use case.
It's different from the other samples; here are the key points:
- It doesn't rely on the streambuf_support branch; the code becomes too difficult to develop and support. The good news is that it's not necessary, so it's not a big deal.
- It uses the python-ffmpeg module. It's no different from a plain FFmpeg process in terms of supported features; I've just spent too much time trying to find a way to pass the usual CLI arguments to python-ffmpeg. If you're more familiar with python-ffmpeg, I assume this shall not be a problem.
- It uses the dump_extra bitstream filter. Without it, SPS and PPS NALUs are often missing from the bitstream, so the decoder can't be properly initialized. I've used the h264_mp4toannexb,dump_extra=all bitstream filter chain to preserve as much data as possible.
Launch rtsp server.
Launch multiple ffmpeg sessions each of which loops input video and submits it to rtsp server:
ffmpeg -re -stream_loop 1 -c:v h264_cuvid -i ~/Videos/bbb_720p.mp4 -c:v h264_nvenc -preset ll -g 60 -b:v 512K -f rtsp rtsp://localhost:8554/live.stream720p
ffmpeg -re -stream_loop 1 -c:v h264_cuvid -resize 960x540 -i ~/Videos/bbb_720p.mp4 -c:v h264_nvenc -preset ll -g 60 -b:v 256K -f rtsp rtsp://localhost:8554/live.stream540p
ffmpeg -re -stream_loop 1 -c:v h264_cuvid -resize 640x360 -i ~/Videos/bbb_720p.mp4 -c:v h264_nvenc -preset ll -g 60 -b:v 128K -f rtsp rtsp://localhost:8554/live.stream360p
Launch sample with 16 RTSP connections:
python3 ./SampleDecodeRTSP.py 0 \
rtsp://localhost:8554/live.stream720p rtsp://localhost:8554/live.stream720p rtsp://localhost:8554/live.stream720p \
rtsp://localhost:8554/live.stream720p rtsp://localhost:8554/live.stream720p rtsp://localhost:8554/live.stream720p \
rtsp://localhost:8554/live.stream540p rtsp://localhost:8554/live.stream540p rtsp://localhost:8554/live.stream540p \
rtsp://localhost:8554/live.stream540p rtsp://localhost:8554/live.stream540p rtsp://localhost:8554/live.stream540p \
rtsp://localhost:8554/live.stream360p rtsp://localhost:8554/live.stream360p rtsp://localhost:8554/live.stream360p \
rtsp://localhost:8554/live.stream360p rtsp://localhost:8554/live.stream360p rtsp://localhost:8554/live.stream360p
Left it overnight, works fine.
Please make sure you're building VPF in Release mode with debugging features such as TRACK_TOKEN_ALLOCATIONS turned off.
How to track memory usage:
watch nvidia-smi
vRAM usage is stable on my machine. RTSP clients were respawned many times but decoding goes on. Please make sure you don't read data from the FFmpeg pipe in simple 4 KB chunks; SampleDecodeRTSP.py covers this topic.
nvidia-smi dmon readings for 3 RTSP clients:
Nvdec usage is 0% before I start streaming. Then it goes to ~3% as streaming starts. Then it's 5% once the RTSP clients are connected and running.
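If you want to consume nvidia-smi dmon output programmatically rather than eyeball it, a small sketch (parse_dmon_line is a hypothetical helper; the column layout in the example rows comes from one driver version and may differ on yours, which is why the header row is used for labeling):

```python
def parse_dmon_line(header: str, line: str) -> dict:
    """Parse one `nvidia-smi dmon` sample row against its '#' header row.

    Column names and order vary across driver versions, so the header
    is split and zipped with the values instead of hardcoding positions.
    """
    cols = header.lstrip("#").split()
    vals = line.split()
    return dict(zip(cols, vals))

# Example rows as printed by one driver version (layout is an assumption):
hdr = "# gpu   pwr gtemp mtemp    sm   mem   enc   dec  mclk  pclk"
row = "    0    70    48     -     3     2     0     5  5000  1590"
reading = parse_dmon_line(hdr, row)
print(reading["dec"])  # Nvdec utilization in percent; prints "5" here
```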
Hi @rarzumanyan , thank you! That is fantastic; I can't wait to try it. Just to be sure I have a correct setup, how can I build VPF in release mode as you were saying? Would the following work, or do I need to explicitly set some parameters?
cmake .. \
-DFFMPEG_DIR:PATH="$PATH_TO_FFMPEG" \
-DVIDEO_CODEC_SDK_DIR:PATH="$PATH_TO_SDK" \
-DGENERATE_PYTHON_BINDINGS:BOOL="1" \
-DGENERATE_PYTORCH_EXTENSION:BOOL="1" \
-DCMAKE_INSTALL_PREFIX:PATH="$INSTALL_PREFIX" \
-DPYTHON_LIBRARY=/usr/lib/python3.6/config-3.6m-x86_64-linux-gnu/libpython3.6.so \
-DPYTHON_INCLUDE_DIR=/usr/include/python3.6 \
-DPYTHON_EXECUTABLE=/home/ubuntu/pycharm/venv/bin/python3
@mfoglio
AFAIK the default CMake build type is RelWithDebInfo (release optimizations + no symbol stripping, for meaningful stack traces), so unless you set the Debug build type explicitly, you're good to go. Same with TRACK_TOKEN_ALLOCATIONS: it's off by default. Your current build config looks fine.
This TRACK_TOKEN_ALLOCATIONS option is the trap I got into during a performance investigation, so I just wanted to make sure you don't repeat my mistake ) It serializes all memory allocations and releases and kills multi-threaded performance.
P. S.
Before going any further with the actual video processing pipeline, I recommend you check SampleDecodeRTSP.py on actual RTSP streams in case you come across any issues; it would be easier to investigate on a smaller piece of code.
Hi @rarzumanyan , it seems that in my case nvc.PyFFmpegDemuxer(url, {}) throws an error. Can you try to run the following?
nvc.PyFFmpegDemuxer("rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov", dict())
I receive the following error:
Can't open rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov: Protocol not found
Traceback (most recent call last):
File "/usr/lib/python3.6/code.py", line 91, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
ValueError: FFmpegDemuxer: no AVFormatContext provided.
Note that this is a public stream.
Hi @mfoglio
This happens when PyFFmpegDemuxer is instantiated to get the video parameters:
nvdmx = nvc.PyFFmpegDemuxer(url, {})
w = nvdmx.Width()
h = nvdmx.Height()
f = nvdmx.Format()
c = nvdmx.Codec()
If you know them in advance, you can remove these lines.
Regarding the error
Can't open rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov: Protocol not found
FFmpegDemuxer: no AVFormatContext provided
There's nothing I can do about it; the avformat library struggles to establish a connection.
However, if you comment out the PyFfmpegDemuxer instantiation, the problem shall no longer exist.
Can you try to run the following?
I did that yesterday while testing the patch. Sometimes it works, sometimes not. I assume the issue is caused by low network bandwidth and possible data loss. It never reproduces on videos I've been streaming over a local network or the localhost loopback interface.
P. S. It also depends on the way your RTSP video is encoded. A properly configured encoder will use infinite GOP + intra refresh + periodic SPS and PPS NALU insertion. If SPS and PPS aren't periodically inserted, or the period is too long, the client may be unable to configure the decoder because these NALUs are absent, or may hit a timeout.
I suppose that mature video libraries like FFmpeg and GStreamer mitigate this at the application level, unlike VPF, which only uses basic libavformat and libavcodec features.
I tried to set the values manually but the code throws an out-of-memory error after about 1-2 minutes when decoding a single 1080p stream. I am using the following parameters:
w = 1920 #nvdmx.Width()
h = 1080 # nvdmx.Height()
f = nvc.PixelFormat.YUV420 # nvc.Pinvdmx.Format()
c = nvc.CudaVideoCodec.H264 # nvdmx.Codec()
Based on the ffprobe output:
ffprobe version n4.4.1 Copyright (c) 2007-2021 the FFmpeg developers
built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)
configuration: --prefix=/home/ubuntu/pycharm/libs/FFmpeg/build_x64_release_shared --disable-static --disable-stripping --disable-doc --enable-shared --enable-openssl --enable-network --enable-protocol=tcp --enable-demuxer=rtsp --enable-decoder=h264
libavutil 56. 70.100 / 56. 70.100
libavcodec 58.134.100 / 58.134.100
libavformat 58. 76.100 / 58. 76.100
libavdevice 58. 13.100 / 58. 13.100
libavfilter 7.110.100 / 7.110.100
libswscale 5. 9.100 / 5. 9.100
libswresample 3. 9.100 / 3. 9.100
[rtsp @ 0x560d2831ad80] method SETUP failed: 461 Unsupported transport
[rtsp @ 0x560d2831ad80] DTS discontinuity in stream 0: packet 22 with DTS 33000, packet 23 with DTS 3254983
Input #0, rtsp, from 'rtsp://localhost/cbb48b92-f74d-4ad5-b8a9-3affbefcc17e_default/0d943055-d4f2-49d2-a8fa-189176228ae1':
Duration: N/A, start: 0.100000, bitrate: N/A
Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 60 fps, 60 tbr, 90k tbn, 120 tbc
EDIT: I think this is happening because the video is not decoded correctly. All surfaces are empty, and therefore the read_size value never gets updated. I am trying to understand whether the parameters above are correct, but I am not sure how to do that.
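One way to sanity-check what's being fed to the decoder (a hypothetical helper, assuming the pipe carries an H.264 Annex B stream): scan the bytes for SPS (NAL type 7) and PPS (NAL type 8) units. If they never appear, the decoder can't initialize and every surface comes back empty regardless of the width/height/codec parameters.

```python
def nal_unit_types(buf: bytes) -> list:
    """Return the H.264 NAL unit types found after Annex B start codes.

    Searching for the 3-byte start code also catches 4-byte start codes,
    since the 00 00 01 suffix matches either form.
    """
    types = []
    i = 0
    while True:
        i3 = buf.find(b"\x00\x00\x01", i)
        if i3 < 0:
            return types
        hdr = i3 + 3
        if hdr < len(buf):
            types.append(buf[hdr] & 0x1F)  # low 5 bits = NAL unit type
        i = hdr

# Synthetic buffer: an SPS (0x67) followed by a PPS (0x68) header byte.
sample = b"\x00\x00\x00\x01\x67" + b"\x00" * 8 + b"\x00\x00\x00\x01\x68"
print(nal_unit_types(sample))  # prints [7, 8]: SPS and PPS are present
```

In practice one could run a few pipe chunks through this before concluding the decoder parameters themselves are wrong.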
@rarzumanyan what parameters are you using for the video stream rtsp://wowzaec2demo.streamlock.net/vod/mp4:BigBuckBunny_115k.mov ?
@mfoglio
w = 1920 #nvdmx.Width()
h = 1080 # nvdmx.Height()
f = nvc.PixelFormat.YUV420 # nvc.Pinvdmx.Format()
c = nvc.CudaVideoCodec.H264 # nvdmx.Codec()
The pixel format shall be nvc.PixelFormat.NV12 ; this is the native Nvdec format.
P. S.
If your 1080p stream is available via LAN, you shall not have issues using PyFfmpegDemuxer to get the parameters.
With NV12 I get:
Process Process-1:
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/home/ubuntu/pycharm/projects/fususcore-ai-detector-nvidia/src/components/workers/video/vpf/SampleDecodeRTSP.py", line 112, in rtsp_client
if pkt_data.bsl < read_size:
AttributeError: 'PyNvCodec.PacketData' object has no attribute 'bsl'
av_interleaved_write_frame(): Broken pipe
Error writing trailer of pipe:1: Broken pipe
frame= 25 fps=0.0 q=-1.0 Lsize= 9kB time=00:00:00.95 bitrate= 74.0kbits/s speed=9.24x
video:12kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Conversion failed!
@mfoglio
Make sure you use the v1.1.1 branch ToT, as mentioned in one of the previous comments. It has the bsl attribute exported to Python:
https://github.com/NVIDIA/VideoProcessingFramework/blob/6a2b2812215fda5c56ba8a96e1ea685d46a23d03/PyNvCodec/src/PyNvCodec.cpp#L318-L324
That's strange, I am currently using that branch:
>> cd /home/ubuntu/pycharm/libs/VideoProcessingFramework
>> git branch
* (HEAD detached at v1.1.1)
master
I really don't know how that would be possible.
I even checked the module path in Python: nvc.__file__ is /home/ubuntu/pycharm/libs/VideoProcessingFramework/install/bin/PyNvCodec.cpython-36m-x86_64-linux-gnu.so
Maybe your Python module loads old shared libraries, or something got cached. Please check that as well and delete the pycache folder if it's there.
I deleted the __pycache__ folders but the problem persists. Could it be because I am using Video Codec SDK 10.0.26?
No, that's not related to the Video Codec SDK.
This problem means that the PacketData structure, which is defined in the C++ part of the VPF code, isn't exported to the Python module.
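A quick diagnostic for this situation (check_bsl_export is a hypothetical helper; PacketData and bsl are the names from the v1.1.1 branch) is to check which binary Python actually loaded and whether it exports the attribute:

```python
def check_bsl_export(mod) -> tuple:
    """Return (module path, whether PacketData.bsl is exported).

    Takes the already-imported module object so it can be pointed at
    whatever PyNvCodec build the interpreter actually picked up.
    """
    pkt = getattr(mod, "PacketData", None)
    has_bsl = pkt is not None and hasattr(pkt, "bsl")
    return getattr(mod, "__file__", "?"), has_bsl

# Usage against the real module (assumed installed):
# import PyNvCodec as nvc
# print(check_bsl_export(nvc))  # a stale build reports False
```

If the reported path points at an old install prefix, or has_bsl is False, the interpreter is loading a stale build rather than the freshly compiled one.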
One more thing to try: delete the CMake cache, run make clean in your build directory, then configure and build VPF from scratch.
Also, when running make install, please make sure the new shared libraries and Python modules are copied into the destination folder.
One strange thing I noticed is the Install configuration: "" message during make install. I guess the make clean should not be needed if I remove the VideoProcessingFramework folder entirely, right?
Here's how I install it:
export PATH_TO_SDK=~/pycharm/libs/Video_Codec_SDK_10.0.26
export PATH_TO_FFMPEG=~/pycharm/libs/FFmpeg/build_x64_release_shared
export LD_LIBRARY_PATH="~/pycharm/libs/FFmpeg/build_x64_release_shared/bin:${LD_LIBRARY_PATH}"
export PATH="~/pycharm/libs/FFmpeg/build_x64_release_shared/bin:${PATH}"
sudo ldconfig
export CUDACXX=/usr/local/cuda/bin/nvcc
git clone https://github.com/NVIDIA/VideoProcessingFramework.git
cd VideoProcessingFramework
git checkout v1.1.1
cd ..
cd VideoProcessingFramework
export INSTALL_PREFIX=$(pwd)/install
mkdir -p install
mkdir -p build
cd build
source /home/ubuntu/pycharm/venv/bin/activate
cmake .. \
-DFFMPEG_DIR:PATH="$PATH_TO_FFMPEG" \
-DVIDEO_CODEC_SDK_DIR:PATH="$PATH_TO_SDK" \
-DGENERATE_PYTHON_BINDINGS:BOOL="1" \
-DGENERATE_PYTORCH_EXTENSION:BOOL="1" \
-DCMAKE_INSTALL_PREFIX:PATH="$INSTALL_PREFIX" \
-DPYTHON_LIBRARY=/usr/lib/python3.6/config-3.6m-x86_64-linux-gnu/libpython3.6.so \
-DPYTHON_INCLUDE_DIR=/usr/include/python3.6 \
-DPYTHON_EXECUTABLE=/home/ubuntu/pycharm/venv/bin/python3
make
make install
cd ../install/bin/
ldd PyNvCodec.cpython-36m-x86_64-linux-gnu.so
echo '# VPF' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH='$PATH_TO_FFMPEG'/lib:$LD_LIBRARY_PATH' >> ~/.bashrc # see below to find correct folder
echo 'export LD_LIBRARY_PATH='$(pwd)':$LD_LIBRARY_PATH' >> ~/.bashrc # see below to find correct folder
source ~/.bashrc
echo 'export PYTHONPATH="~/pycharm/libs/VideoProcessingFramework/install/bin:${PYTHONPATH}"' >> ~/.bashrc
echo 'export PATH="~/pycharm/libs/FFmpeg/build_x64_release_shared/bin:${PATH}"' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH="~/pycharm/libs/VideoProcessingFramework/install/bin:${LD_LIBRARY_PATH}"' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH="~/pycharm/libs/FFmpeg/build_x64_release_shared/lib:${LD_LIBRARY_PATH}"' >> ~/.bashrc
source ~/.bashrc
Full output (note that here I don't export anything to ~/.bashrc because I have already done it):
ubuntu@ip-172-31-9-127:~/pycharm/libs$ export PATH_TO_SDK=~/pycharm/libs/Video_Codec_SDK_10.0.26
ubuntu@ip-172-31-9-127:~/pycharm/libs$ export PATH_TO_FFMPEG=~/pycharm/libs/FFmpeg/build_x64_release_shared
ubuntu@ip-172-31-9-127:~/pycharm/libs$ export LD_LIBRARY_PATH="~/pycharm/libs/FFmpeg/build_x64_release_shared/bin:${LD_LIBRARY_PATH}"
ubuntu@ip-172-31-9-127:~/pycharm/libs$ export PATH="~/pycharm/libs/FFmpeg/build_x64_release_shared/bin:${PATH}"
ubuntu@ip-172-31-9-127:~/pycharm/libs$ sudo ldconfig
ubuntu@ip-172-31-9-127:~/pycharm/libs$
ubuntu@ip-172-31-9-127:~/pycharm/libs$ export CUDACXX=/usr/local/cuda/bin/nvcc
ubuntu@ip-172-31-9-127:~/pycharm/libs$ git clone https://github.com/NVIDIA/VideoProcessingFramework.git
Cloning into 'VideoProcessingFramework'...
remote: Enumerating objects: 2682, done.
remote: Counting objects: 100% (1203/1203), done.
remote: Compressing objects: 100% (491/491), done.
remote: Total 2682 (delta 954), reused 923 (delta 711), pack-reused 1479
Receiving objects: 100% (2682/2682), 1.56 MiB | 32.60 MiB/s, done.
Resolving deltas: 100% (1729/1729), done.
ubuntu@ip-172-31-9-127:~/pycharm/libs$ cd VideoProcessingFramework
ubuntu@ip-172-31-9-127:~/pycharm/libs/VideoProcessingFramework$ git checkout v1.1.1
Note: switching to 'v1.1.1'.
You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.
If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:
git switch -c <new-branch-name>
Or undo this operation with:
git switch -
Turn off this advice by setting config variable advice.detachedHead to false
HEAD is now at 3b1f667 Fixing issues with PyFfmpegDemuxer ctor
ubuntu@ip-172-31-9-127:~/pycharm/libs/VideoProcessingFramework$ cd ..
ubuntu@ip-172-31-9-127:~/pycharm/libs$ cd VideoProcessingFramework
ubuntu@ip-172-31-9-127:~/pycharm/libs/VideoProcessingFramework$ export INSTALL_PREFIX=$(pwd)/install
ubuntu@ip-172-31-9-127:~/pycharm/libs/VideoProcessingFramework$ mkdir -p install
ubuntu@ip-172-31-9-127:~/pycharm/libs/VideoProcessingFramework$ mkdir -p build
ubuntu@ip-172-31-9-127:~/pycharm/libs/VideoProcessingFramework$ cd build
ubuntu@ip-172-31-9-127:~/pycharm/libs/VideoProcessingFramework/build$
ubuntu@ip-172-31-9-127:~/pycharm/libs/VideoProcessingFramework/build$ source /home/ubuntu/pycharm/venv/bin/activate
(venv) ubuntu@ip-172-31-9-127:~/pycharm/libs/VideoProcessingFramework/build$ cmake .. \
> -DFFMPEG_DIR:PATH="$PATH_TO_FFMPEG" \
> -DVIDEO_CODEC_SDK_DIR:PATH="$PATH_TO_SDK" \
> -DGENERATE_PYTHON_BINDINGS:BOOL="1" \
> -DGENERATE_PYTORCH_EXTENSION:BOOL="1" \
> -DCMAKE_INSTALL_PREFIX:PATH="$INSTALL_PREFIX" \
> -DPYTHON_LIBRARY=/usr/lib/python3.6/config-3.6m-x86_64-linux-gnu/libpython3.6.so \
> -DPYTHON_INCLUDE_DIR=/usr/include/python3.6 \
> -DPYTHON_EXECUTABLE=/home/ubuntu/pycharm/venv/bin/python3
-- The C compiler identification is GNU 7.5.0
-- The CXX compiler identification is GNU 7.5.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- The CUDA compiler identification is NVIDIA 11.1.105
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc -- works
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Searching for FFmpeg libs in /home/ubuntu/pycharm/libs/FFmpeg/build_x64_release_shared/lib
-- Searching for FFmpeg headers in /home/ubuntu/pycharm/libs/FFmpeg/build_x64_release_shared/include
-- Searching for Video Codec SDK headers in /home/ubuntu/pycharm/libs/Video_Codec_SDK_10.0.26/include folder
-- Searching for Video Codec SDK headers in /home/ubuntu/pycharm/libs/Video_Codec_SDK_10.0.26/Interface folder
-- Found PythonLibs: /usr/lib/python3.6/config-3.6m-x86_64-linux-gnu/libpython3.6.so (found suitable version "3.6.9", minimum required is "3.5")
-- Found PythonInterp: /home/ubuntu/pycharm/venv/bin/python3 (found version "3.6.9")
-- Found PythonLibs: /usr/lib/python3.6/config-3.6m-x86_64-linux-gnu/libpython3.6.so
-- pybind11 v2.3.dev0
-- Performing Test HAS_FLTO
-- Performing Test HAS_FLTO - Success
-- LTO enabled
-- Configuring done
-- Generating done
-- Build files have been written to: /home/ubuntu/pycharm/libs/VideoProcessingFramework/build
(venv) ubuntu@ip-172-31-9-127:~/pycharm/libs/VideoProcessingFramework/build$ make
Scanning dependencies of target TC_CORE
[ 3%] Building CXX object PyNvCodec/TC/TC_CORE/CMakeFiles/TC_CORE.dir/src/Task.cpp.o
[ 7%] Building CXX object PyNvCodec/TC/TC_CORE/CMakeFiles/TC_CORE.dir/src/Token.cpp.o
[ 11%] Linking CXX shared library libTC_CORE.so
[ 11%] Built target TC_CORE
Scanning dependencies of target PytorchNvCodec
[ 14%] Generating Pytorch_Nv_Codec
running build
running build_ext
/home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/utils/cpp_extension.py:370: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
warnings.warn(msg.format('we could not find ninja.'))
building 'PytorchNvCodec' extension
creating build
creating build/temp.linux-x86_64-3.6
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I/home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/include -I/home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/include/torch/csrc/api/include -I/home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/include/TH -I/home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/include/THC -I/usr/local/cuda-11.1/include -I/home/ubuntu/pycharm/venv/include -I/usr/include/python3.6m -c PytorchNvCodec.cpp -o build/temp.linux-x86_64-3.6/PytorchNvCodec.o -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -DTORCH_EXTENSION_NAME=PytorchNvCodec -D_GLIBCXX_USE_CXX11_ABI=0 -std=c++14
In file included from /home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/include/ATen/Parallel.h:140:0,
from /home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/utils.h:3,
from /home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/nn/cloneable.h:5,
from /home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/nn.h:3,
from /home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/include/torch/csrc/api/include/torch/all.h:13,
from /home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/include/torch/extension.h:4,
from PytorchNvCodec.cpp:16:
/home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/include/ATen/ParallelOpenMP.h:87:0: warning: ignoring #pragma omp parallel [-Wunknown-pragmas]
#pragma omp parallel for if ((end - begin) >= grain_size)
x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.6/PytorchNvCodec.o -L/home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/lib -L/usr/local/cuda-11.1/lib64 -lc10 -ltorch -ltorch_cpu -ltorch_python -lcudart -lc10_cuda -ltorch_cuda_cu -ltorch_cuda_cpp -o /home/ubuntu/pycharm/libs/VideoProcessingFramework/install/PytorchNvCodec.cpython-36m-x86_64-linux-gnu.so
[ 14%] Built target PytorchNvCodec
Scanning dependencies of target TC
[ 18%] Building CXX object PyNvCodec/TC/CMakeFiles/TC.dir/src/MemoryInterfaces.cpp.o
[ 22%] Building CXX object PyNvCodec/TC/CMakeFiles/TC.dir/src/Tasks.cpp.o
[ 25%] Building CXX object PyNvCodec/TC/CMakeFiles/TC.dir/src/TasksColorCvt.cpp.o
[ 29%] Building CXX object PyNvCodec/TC/CMakeFiles/TC.dir/src/FFmpegDemuxer.cpp.o
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/FFmpegDemuxer.cpp: In constructor ‘FFmpegDemuxer::FFmpegDemuxer(AVFormatContext*)’:
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/FFmpegDemuxer.cpp:515:42: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
gop_size = fmtc->streams[videoStream]->codec->gop_size;
^~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/FFmpegDemuxer.h:24:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/FFmpegDemuxer.cpp:14:
/usr/include/x86_64-linux-gnu/libavformat/avformat.h:893:21: note: declared here
AVCodecContext *codec;
^~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/FFmpegDemuxer.cpp:515:42: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
gop_size = fmtc->streams[videoStream]->codec->gop_size;
^~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/FFmpegDemuxer.h:24:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/FFmpegDemuxer.cpp:14:
/usr/include/x86_64-linux-gnu/libavformat/avformat.h:893:21: note: declared here
AVCodecContext *codec;
^~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/FFmpegDemuxer.cpp:515:42: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
gop_size = fmtc->streams[videoStream]->codec->gop_size;
^~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/FFmpegDemuxer.h:24:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/FFmpegDemuxer.cpp:14:
/usr/include/x86_64-linux-gnu/libavformat/avformat.h:893:21: note: declared here
AVCodecContext *codec;
^~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/FFmpegDemuxer.cpp:527:45: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
color_space = fmtc->streams[videoStream]->codec->colorspace;
^~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/FFmpegDemuxer.h:24:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/FFmpegDemuxer.cpp:14:
/usr/include/x86_64-linux-gnu/libavformat/avformat.h:893:21: note: declared here
AVCodecContext *codec;
^~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/FFmpegDemuxer.cpp:527:45: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
color_space = fmtc->streams[videoStream]->codec->colorspace;
^~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/FFmpegDemuxer.h:24:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/FFmpegDemuxer.cpp:14:
/usr/include/x86_64-linux-gnu/libavformat/avformat.h:893:21: note: declared here
AVCodecContext *codec;
^~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/FFmpegDemuxer.cpp:527:45: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
color_space = fmtc->streams[videoStream]->codec->colorspace;
^~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/FFmpegDemuxer.h:24:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/FFmpegDemuxer.cpp:14:
/usr/include/x86_64-linux-gnu/libavformat/avformat.h:893:21: note: declared here
AVCodecContext *codec;
^~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/FFmpegDemuxer.cpp:528:45: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
color_range = fmtc->streams[videoStream]->codec->color_range;
^~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/FFmpegDemuxer.h:24:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/FFmpegDemuxer.cpp:14:
/usr/include/x86_64-linux-gnu/libavformat/avformat.h:893:21: note: declared here
AVCodecContext *codec;
^~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/FFmpegDemuxer.cpp:528:45: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
color_range = fmtc->streams[videoStream]->codec->color_range;
^~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/FFmpegDemuxer.h:24:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/FFmpegDemuxer.cpp:14:
/usr/include/x86_64-linux-gnu/libavformat/avformat.h:893:21: note: declared here
AVCodecContext *codec;
^~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/FFmpegDemuxer.cpp:528:45: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
color_range = fmtc->streams[videoStream]->codec->color_range;
^~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/FFmpegDemuxer.h:24:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/FFmpegDemuxer.cpp:14:
/usr/include/x86_64-linux-gnu/libavformat/avformat.h:893:21: note: declared here
AVCodecContext *codec;
^~~~~
[ 33%] Building CXX object PyNvCodec/TC/CMakeFiles/TC.dir/src/NvDecoder.cpp.o
[ 37%] Building CXX object PyNvCodec/TC/CMakeFiles/TC.dir/src/NvEncoder.cpp.o
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvEncoder.cpp: In member function ‘void NvEncoder::CreateEncoder(const NV_ENC_INITIALIZE_PARAMS*)’:
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvEncoder.cpp:232:40: warning: ‘NV_ENC_PRESET_DEFAULT_GUID’ is deprecated: WILL BE REMOVED IN A FUTURE VIDEO CODEC SDK VERSION [-Wdeprecated-declarations]
NV_ENC_PRESET_DEFAULT_GUID,
^~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/NvEncoder.h:16:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvEncoder.cpp:14:
/home/ubuntu/pycharm/libs/Video_Codec_SDK_10.0.26/Interface/nvEncodeAPI.h:205:37: note: declared here
NV_ENC_DEPRECATED static const GUID NV_ENC_PRESET_DEFAULT_GUID =
^~~~~~~~~~~~~~~~~~~~~~~~~~
[ 40%] Building CXX object PyNvCodec/TC/CMakeFiles/TC.dir/src/NvEncoderCuda.cpp.o
[ 44%] Building CXX object PyNvCodec/TC/CMakeFiles/TC.dir/src/NppCommon.cpp.o
[ 48%] Building CXX object PyNvCodec/TC/CMakeFiles/TC.dir/src/NvCodecCliOptions.cpp.o
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp: In lambda function:
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:131:36: warning: ‘NV_ENC_PRESET_DEFAULT_GUID’ is deprecated: WILL BE REMOVED IN A FUTURE VIDEO CODEC SDK VERSION [-Wdeprecated-declarations]
{"default", PresetProperties(NV_ENC_PRESET_DEFAULT_GUID, false, false)},
^~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/MemoryInterfaces.hpp:19:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:20:
/home/ubuntu/pycharm/libs/Video_Codec_SDK_10.0.26/Interface/nvEncodeAPI.h:205:37: note: declared here
NV_ENC_DEPRECATED static const GUID NV_ENC_PRESET_DEFAULT_GUID =
^~~~~~~~~~~~~~~~~~~~~~~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:132:31: warning: ‘NV_ENC_PRESET_HP_GUID’ is deprecated: WILL BE REMOVED IN A FUTURE VIDEO CODEC SDK VERSION [-Wdeprecated-declarations]
{"hp", PresetProperties(NV_ENC_PRESET_HP_GUID, false, false)},
^~~~~~~~~~~~~~~~~~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/MemoryInterfaces.hpp:19:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:20:
/home/ubuntu/pycharm/libs/Video_Codec_SDK_10.0.26/Interface/nvEncodeAPI.h:209:37: note: declared here
NV_ENC_DEPRECATED static const GUID NV_ENC_PRESET_HP_GUID =
^~~~~~~~~~~~~~~~~~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:133:31: warning: ‘NV_ENC_PRESET_HQ_GUID’ is deprecated: WILL BE REMOVED IN A FUTURE VIDEO CODEC SDK VERSION [-Wdeprecated-declarations]
{"hq", PresetProperties(NV_ENC_PRESET_HQ_GUID, false, false)},
^~~~~~~~~~~~~~~~~~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/MemoryInterfaces.hpp:19:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:20:
/home/ubuntu/pycharm/libs/Video_Codec_SDK_10.0.26/Interface/nvEncodeAPI.h:213:37: note: declared here
NV_ENC_DEPRECATED static const GUID NV_ENC_PRESET_HQ_GUID =
^~~~~~~~~~~~~~~~~~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:134:31: warning: ‘NV_ENC_PRESET_BD_GUID’ is deprecated: WILL BE REMOVED IN A FUTURE VIDEO CODEC SDK VERSION [-Wdeprecated-declarations]
{"bd", PresetProperties(NV_ENC_PRESET_BD_GUID, false, false)},
^~~~~~~~~~~~~~~~~~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/MemoryInterfaces.hpp:19:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:20:
/home/ubuntu/pycharm/libs/Video_Codec_SDK_10.0.26/Interface/nvEncodeAPI.h:217:37: note: declared here
NV_ENC_DEPRECATED static const GUID NV_ENC_PRESET_BD_GUID =
^~~~~~~~~~~~~~~~~~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:136:25: warning: ‘NV_ENC_PRESET_LOW_LATENCY_DEFAULT_GUID’ is deprecated: WILL BE REMOVED IN A FUTURE VIDEO CODEC SDK VERSION [-Wdeprecated-declarations]
PresetProperties(NV_ENC_PRESET_LOW_LATENCY_DEFAULT_GUID, true, false)},
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/MemoryInterfaces.hpp:19:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:20:
/home/ubuntu/pycharm/libs/Video_Codec_SDK_10.0.26/Interface/nvEncodeAPI.h:221:37: note: declared here
NV_ENC_DEPRECATED static const GUID NV_ENC_PRESET_LOW_LATENCY_DEFAULT_GUID =
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:138:25: warning: ‘NV_ENC_PRESET_LOW_LATENCY_HP_GUID’ is deprecated: WILL BE REMOVED IN A FUTURE VIDEO CODEC SDK VERSION [-Wdeprecated-declarations]
PresetProperties(NV_ENC_PRESET_LOW_LATENCY_HP_GUID, true, false)},
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/MemoryInterfaces.hpp:19:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:20:
/home/ubuntu/pycharm/libs/Video_Codec_SDK_10.0.26/Interface/nvEncodeAPI.h:229:37: note: declared here
NV_ENC_DEPRECATED static const GUID NV_ENC_PRESET_LOW_LATENCY_HP_GUID =
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:140:25: warning: ‘NV_ENC_PRESET_LOW_LATENCY_HQ_GUID’ is deprecated: WILL BE REMOVED IN A FUTURE VIDEO CODEC SDK VERSION [-Wdeprecated-declarations]
PresetProperties(NV_ENC_PRESET_LOW_LATENCY_HQ_GUID, true, false)},
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/MemoryInterfaces.hpp:19:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:20:
/home/ubuntu/pycharm/libs/Video_Codec_SDK_10.0.26/Interface/nvEncodeAPI.h:225:37: note: declared here
NV_ENC_DEPRECATED static const GUID NV_ENC_PRESET_LOW_LATENCY_HQ_GUID =
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:142:25: warning: ‘NV_ENC_PRESET_LOSSLESS_DEFAULT_GUID’ is deprecated: WILL BE REMOVED IN A FUTURE VIDEO CODEC SDK VERSION [-Wdeprecated-declarations]
PresetProperties(NV_ENC_PRESET_LOSSLESS_DEFAULT_GUID, false, true)},
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/MemoryInterfaces.hpp:19:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:20:
/home/ubuntu/pycharm/libs/Video_Codec_SDK_10.0.26/Interface/nvEncodeAPI.h:233:37: note: declared here
NV_ENC_DEPRECATED static const GUID NV_ENC_PRESET_LOSSLESS_DEFAULT_GUID =
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:144:25: warning: ‘NV_ENC_PRESET_LOSSLESS_HP_GUID’ is deprecated: WILL BE REMOVED IN A FUTURE VIDEO CODEC SDK VERSION [-Wdeprecated-declarations]
PresetProperties(NV_ENC_PRESET_LOSSLESS_HP_GUID, false, true)}
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/MemoryInterfaces.hpp:19:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:20:
/home/ubuntu/pycharm/libs/Video_Codec_SDK_10.0.26/Interface/nvEncodeAPI.h:237:37: note: declared here
NV_ENC_DEPRECATED static const GUID NV_ENC_PRESET_LOSSLESS_HP_GUID =
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp: In function ‘std::__cxx11::string ToString(const GUID&)’:
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:275:23: warning: ‘NV_ENC_PRESET_DEFAULT_GUID’ is deprecated: WILL BE REMOVED IN A FUTURE VIDEO CODEC SDK VERSION [-Wdeprecated-declarations]
else if (IsSameGuid(NV_ENC_PRESET_DEFAULT_GUID, guid)) {
^~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/MemoryInterfaces.hpp:19:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:20:
/home/ubuntu/pycharm/libs/Video_Codec_SDK_10.0.26/Interface/nvEncodeAPI.h:205:37: note: declared here
NV_ENC_DEPRECATED static const GUID NV_ENC_PRESET_DEFAULT_GUID =
^~~~~~~~~~~~~~~~~~~~~~~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:277:25: warning: ‘NV_ENC_PRESET_HP_GUID’ is deprecated: WILL BE REMOVED IN A FUTURE VIDEO CODEC SDK VERSION [-Wdeprecated-declarations]
} else if (IsSameGuid(NV_ENC_PRESET_HP_GUID, guid)) {
^~~~~~~~~~~~~~~~~~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/MemoryInterfaces.hpp:19:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:20:
/home/ubuntu/pycharm/libs/Video_Codec_SDK_10.0.26/Interface/nvEncodeAPI.h:209:37: note: declared here
NV_ENC_DEPRECATED static const GUID NV_ENC_PRESET_HP_GUID =
^~~~~~~~~~~~~~~~~~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:279:25: warning: ‘NV_ENC_PRESET_HQ_GUID’ is deprecated: WILL BE REMOVED IN A FUTURE VIDEO CODEC SDK VERSION [-Wdeprecated-declarations]
} else if (IsSameGuid(NV_ENC_PRESET_HQ_GUID, guid)) {
^~~~~~~~~~~~~~~~~~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/MemoryInterfaces.hpp:19:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:20:
/home/ubuntu/pycharm/libs/Video_Codec_SDK_10.0.26/Interface/nvEncodeAPI.h:213:37: note: declared here
NV_ENC_DEPRECATED static const GUID NV_ENC_PRESET_HQ_GUID =
^~~~~~~~~~~~~~~~~~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:281:25: warning: ‘NV_ENC_PRESET_BD_GUID’ is deprecated: WILL BE REMOVED IN A FUTURE VIDEO CODEC SDK VERSION [-Wdeprecated-declarations]
} else if (IsSameGuid(NV_ENC_PRESET_BD_GUID, guid)) {
^~~~~~~~~~~~~~~~~~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/MemoryInterfaces.hpp:19:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:20:
/home/ubuntu/pycharm/libs/Video_Codec_SDK_10.0.26/Interface/nvEncodeAPI.h:217:37: note: declared here
NV_ENC_DEPRECATED static const GUID NV_ENC_PRESET_BD_GUID =
^~~~~~~~~~~~~~~~~~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:283:25: warning: ‘NV_ENC_PRESET_LOW_LATENCY_DEFAULT_GUID’ is deprecated: WILL BE REMOVED IN A FUTURE VIDEO CODEC SDK VERSION [-Wdeprecated-declarations]
} else if (IsSameGuid(NV_ENC_PRESET_LOW_LATENCY_DEFAULT_GUID, guid)) {
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/MemoryInterfaces.hpp:19:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:20:
/home/ubuntu/pycharm/libs/Video_Codec_SDK_10.0.26/Interface/nvEncodeAPI.h:221:37: note: declared here
NV_ENC_DEPRECATED static const GUID NV_ENC_PRESET_LOW_LATENCY_DEFAULT_GUID =
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:285:25: warning: ‘NV_ENC_PRESET_LOW_LATENCY_HQ_GUID’ is deprecated: WILL BE REMOVED IN A FUTURE VIDEO CODEC SDK VERSION [-Wdeprecated-declarations]
} else if (IsSameGuid(NV_ENC_PRESET_LOW_LATENCY_HQ_GUID, guid)) {
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/MemoryInterfaces.hpp:19:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:20:
/home/ubuntu/pycharm/libs/Video_Codec_SDK_10.0.26/Interface/nvEncodeAPI.h:225:37: note: declared here
NV_ENC_DEPRECATED static const GUID NV_ENC_PRESET_LOW_LATENCY_HQ_GUID =
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:287:25: warning: ‘NV_ENC_PRESET_LOW_LATENCY_HP_GUID’ is deprecated: WILL BE REMOVED IN A FUTURE VIDEO CODEC SDK VERSION [-Wdeprecated-declarations]
} else if (IsSameGuid(NV_ENC_PRESET_LOW_LATENCY_HP_GUID, guid)) {
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/MemoryInterfaces.hpp:19:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:20:
/home/ubuntu/pycharm/libs/Video_Codec_SDK_10.0.26/Interface/nvEncodeAPI.h:229:37: note: declared here
NV_ENC_DEPRECATED static const GUID NV_ENC_PRESET_LOW_LATENCY_HP_GUID =
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:289:25: warning: ‘NV_ENC_PRESET_LOSSLESS_DEFAULT_GUID’ is deprecated: WILL BE REMOVED IN A FUTURE VIDEO CODEC SDK VERSION [-Wdeprecated-declarations]
} else if (IsSameGuid(NV_ENC_PRESET_LOSSLESS_DEFAULT_GUID, guid)) {
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/MemoryInterfaces.hpp:19:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:20:
/home/ubuntu/pycharm/libs/Video_Codec_SDK_10.0.26/Interface/nvEncodeAPI.h:233:37: note: declared here
NV_ENC_DEPRECATED static const GUID NV_ENC_PRESET_LOSSLESS_DEFAULT_GUID =
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:291:25: warning: ‘NV_ENC_PRESET_LOSSLESS_HP_GUID’ is deprecated: WILL BE REMOVED IN A FUTURE VIDEO CODEC SDK VERSION [-Wdeprecated-declarations]
} else if (IsSameGuid(NV_ENC_PRESET_LOSSLESS_HP_GUID, guid)) {
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/inc/MemoryInterfaces.hpp:19:0,
from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/NvCodecCliOptions.cpp:20:
/home/ubuntu/pycharm/libs/Video_Codec_SDK_10.0.26/Interface/nvEncodeAPI.h:237:37: note: declared here
NV_ENC_DEPRECATED static const GUID NV_ENC_PRESET_LOSSLESS_HP_GUID =
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[ 51%] Building CXX object PyNvCodec/TC/CMakeFiles/TC.dir/src/FfmpegSwDecoder.cpp.o
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/FfmpegSwDecoder.cpp: In constructor ‘VPF::FfmpegDecodeFrame_Impl::FfmpegDecodeFrame_Impl(const char*, AVDictionary*)’:
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/FfmpegSwDecoder.cpp:106:49: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
avctx = fmt_ctx->streams[video_stream_idx]->codec;
^~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/FfmpegSwDecoder.cpp:22:0:
/usr/include/x86_64-linux-gnu/libavformat/avformat.h:893:21: note: declared here
AVCodecContext *codec;
^~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/FfmpegSwDecoder.cpp:106:49: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
avctx = fmt_ctx->streams[video_stream_idx]->codec;
^~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/FfmpegSwDecoder.cpp:22:0:
/usr/include/x86_64-linux-gnu/libavformat/avformat.h:893:21: note: declared here
AVCodecContext *codec;
^~~~~
/home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/FfmpegSwDecoder.cpp:106:49: warning: ‘AVStream::codec’ is deprecated [-Wdeprecated-declarations]
avctx = fmt_ctx->streams[video_stream_idx]->codec;
^~~~~
In file included from /home/ubuntu/pycharm/libs/VideoProcessingFramework/PyNvCodec/TC/src/FfmpegSwDecoder.cpp:22:0:
/usr/include/x86_64-linux-gnu/libavformat/avformat.h:893:21: note: declared here
AVCodecContext *codec;
^~~~~
[ 55%] Linking CXX shared library libTC.so
[ 55%] Built target TC
Scanning dependencies of target PyNvCodec
[ 59%] Building CXX object PyNvCodec/CMakeFiles/PyNvCodec.dir/src/PyNvCodec.cpp.o
[ 62%] Building CXX object PyNvCodec/CMakeFiles/PyNvCodec.dir/src/PyFrameUploader.cpp.o
[ 66%] Building CXX object PyNvCodec/CMakeFiles/PyNvCodec.dir/src/PyBufferUploader.cpp.o
[ 70%] Building CXX object PyNvCodec/CMakeFiles/PyNvCodec.dir/src/PySurfaceDownloader.cpp.o
[ 74%] Building CXX object PyNvCodec/CMakeFiles/PyNvCodec.dir/src/PyCudaBufferDownloader.cpp.o
[ 77%] Building CXX object PyNvCodec/CMakeFiles/PyNvCodec.dir/src/PySurfaceConverter.cpp.o
[ 81%] Building CXX object PyNvCodec/CMakeFiles/PyNvCodec.dir/src/PySurfaceResizer.cpp.o
[ 85%] Building CXX object PyNvCodec/CMakeFiles/PyNvCodec.dir/src/PyFFMpegDecoder.cpp.o
[ 88%] Building CXX object PyNvCodec/CMakeFiles/PyNvCodec.dir/src/PyFFMpegDemuxer.cpp.o
[ 92%] Building CXX object PyNvCodec/CMakeFiles/PyNvCodec.dir/src/PyNvDecoder.cpp.o
[ 96%] Building CXX object PyNvCodec/CMakeFiles/PyNvCodec.dir/src/PyNvEncoder.cpp.o
[100%] Linking CXX shared library PyNvCodec.cpython-36m-x86_64-linux-gnu.so
[100%] Built target PyNvCodec
(venv) ubuntu@ip-172-31-9-127:~/pycharm/libs/VideoProcessingFramework/build$ make install
[ 11%] Built target TC_CORE
[ 14%] Generating Pytorch_Nv_Codec
running build
running build_ext
/home/ubuntu/pycharm/venv/lib/python3.6/site-packages/torch/utils/cpp_extension.py:370: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
warnings.warn(msg.format('we could not find ninja.'))
[ 14%] Built target PytorchNvCodec
[ 55%] Built target TC
[100%] Built target PyNvCodec
Install the project...
-- Install configuration: ""
-- Installing: /home/ubuntu/pycharm/libs/VideoProcessingFramework/install/bin/libTC_CORE.so
-- Installing: /home/ubuntu/pycharm/libs/VideoProcessingFramework/install/bin/libTC.so
-- Installing: /home/ubuntu/pycharm/libs/VideoProcessingFramework/install/bin/PyNvCodec.cpython-36m-x86_64-linux-gnu.so
-- Installing: /home/ubuntu/pycharm/libs/VideoProcessingFramework/install/bin/SampleDecode.py
-- Installing: /home/ubuntu/pycharm/libs/VideoProcessingFramework/install/bin/SampleEncode.py
-- Installing: /home/ubuntu/pycharm/libs/VideoProcessingFramework/install/bin/SampleDecodeSw.py
-- Installing: /home/ubuntu/pycharm/libs/VideoProcessingFramework/install/bin/SampleDecodeMultiThread.py
-- Installing: /home/ubuntu/pycharm/libs/VideoProcessingFramework/install/bin/SampleEncodeMultiThread.py
-- Installing: /home/ubuntu/pycharm/libs/VideoProcessingFramework/install/bin/SampleDemuxDecode.py
-- Installing: /home/ubuntu/pycharm/libs/VideoProcessingFramework/install/bin/SamplePyTorch.py
-- Installing: /home/ubuntu/pycharm/libs/VideoProcessingFramework/install/bin/SampleTensorRTResnet.py
-- Installing: /home/ubuntu/pycharm/libs/VideoProcessingFramework/install/bin/SampleTorchResnet.py
-- Installing: /home/ubuntu/pycharm/libs/VideoProcessingFramework/install/bin/Tests.py
-- Installing: /home/ubuntu/pycharm/libs/VideoProcessingFramework/install/bin/test_PyNvDecoder.py
-- Installing: /home/ubuntu/pycharm/libs/VideoProcessingFramework/install/bin/test.mp4
(venv) ubuntu@ip-172-31-9-127:~/pycharm/libs/VideoProcessingFramework/build$
(venv) ubuntu@ip-172-31-9-127:~/pycharm/libs/VideoProcessingFramework/build$ cd ../install/bin/
(venv) ubuntu@ip-172-31-9-127:~/pycharm/libs/VideoProcessingFramework/install/bin$ ldd PyNvCodec.cpython-36m-x86_64-linux-gnu.so
linux-vdso.so.1 (0x00007f91eb41f000)
libTC.so => /home/ubuntu/pycharm/libs/VideoProcessingFramework/install/bin/libTC.so (0x00007f91eabc6000)
libpython3.6m.so.1.0 => /usr/lib/x86_64-linux-gnu/libpython3.6m.so.1.0 (0x00007f91ea51b000)
libTC_CORE.so => /home/ubuntu/pycharm/libs/VideoProcessingFramework/install/bin/libTC_CORE.so (0x00007f91ea315000)
libcuda.so.1 => /usr/lib/x86_64-linux-gnu/libcuda.so.1 (0x00007f91e8cea000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007f91e8961000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007f91e8749000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f91e8358000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f91e8139000)
libnvcuvid.so.1 => /usr/lib/x86_64-linux-gnu/libnvcuvid.so.1 (0x00007f91e7ae2000)
libavutil.so.55 => /usr/lib/x86_64-linux-gnu/libavutil.so.55 (0x00007f91e7855000)
libavcodec.so.57 => /usr/lib/x86_64-linux-gnu/libavcodec.so.57 (0x00007f91e6133000)
libavformat.so.57 => /usr/lib/x86_64-linux-gnu/libavformat.so.57 (0x00007f91e5cd8000)
libnppig.so.11 => /usr/local/cuda-11.1/targets/x86_64-linux/lib/libnppig.so.11 (0x00007f91e365c000)
libnppicc.so.11 => /usr/local/cuda-11.1/targets/x86_64-linux/lib/libnppicc.so.11 (0x00007f91e2e74000)
libnppidei.so.11 => /usr/local/cuda-11.1/targets/x86_64-linux/lib/libnppidei.so.11 (0x00007f91e22cf000)
libexpat.so.1 => /lib/x86_64-linux-gnu/libexpat.so.1 (0x00007f91e209d000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f91e1e80000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f91e1c7c000)
libutil.so.1 => /lib/x86_64-linux-gnu/libutil.so.1 (0x00007f91e1a79000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007f91e16db000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007f91e14d3000)
/lib64/ld-linux-x86-64.so.2 (0x00007f91eb1f7000)
libX11.so.6 => /usr/lib/x86_64-linux-gnu/libX11.so.6 (0x00007f91e119b000)
libdrm.so.2 => /usr/lib/x86_64-linux-gnu/libdrm.so.2 (0x00007f91e0f8a000)
libvdpau.so.1 => /usr/lib/x86_64-linux-gnu/libvdpau.so.1 (0x00007f91e0d86000)
libva.so.2 => /usr/lib/x86_64-linux-gnu/libva.so.2 (0x00007f91e0b65000)
libva-x11.so.2 => /usr/lib/x86_64-linux-gnu/libva-x11.so.2 (0x00007f91e095f000)
libva-drm.so.2 => /usr/lib/x86_64-linux-gnu/libva-drm.so.2 (0x00007f91e075c000)
libswresample.so.2 => /usr/lib/x86_64-linux-gnu/libswresample.so.2 (0x00007f91e053d000)
libwebp.so.6 => /usr/lib/x86_64-linux-gnu/libwebp.so.6 (0x00007f91e02d4000)
libcrystalhd.so.3 => /usr/lib/x86_64-linux-gnu/libcrystalhd.so.3 (0x00007f91e00b9000)
libzvbi.so.0 => /usr/lib/x86_64-linux-gnu/libzvbi.so.0 (0x00007f91dfe2e000)
libxvidcore.so.4 => /usr/lib/x86_64-linux-gnu/libxvidcore.so.4 (0x00007f91dfb1d000)
libx265.so.146 => /usr/lib/x86_64-linux-gnu/libx265.so.146 (0x00007f91dee9c000)
libx264.so.152 => /usr/lib/x86_64-linux-gnu/libx264.so.152 (0x00007f91deaf7000)
libwebpmux.so.3 => /usr/lib/x86_64-linux-gnu/libwebpmux.so.3 (0x00007f91de8ed000)
libwavpack.so.1 => /usr/lib/x86_64-linux-gnu/libwavpack.so.1 (0x00007f91de6c3000)
libvpx.so.5 => /usr/lib/x86_64-linux-gnu/libvpx.so.5 (0x00007f91de277000)
libvorbisenc.so.2 => /usr/lib/x86_64-linux-gnu/libvorbisenc.so.2 (0x00007f91ddfce000)
libvorbis.so.0 => /usr/lib/x86_64-linux-gnu/libvorbis.so.0 (0x00007f91ddda3000)
libtwolame.so.0 => /usr/lib/x86_64-linux-gnu/libtwolame.so.0 (0x00007f91ddb80000)
libtheoraenc.so.1 => /usr/lib/x86_64-linux-gnu/libtheoraenc.so.1 (0x00007f91dd941000)
libtheoradec.so.1 => /usr/lib/x86_64-linux-gnu/libtheoradec.so.1 (0x00007f91dd723000)
libspeex.so.1 => /usr/lib/x86_64-linux-gnu/libspeex.so.1 (0x00007f91dd509000)
libsnappy.so.1 => /usr/lib/x86_64-linux-gnu/libsnappy.so.1 (0x00007f91dd301000)
libshine.so.3 => /usr/lib/x86_64-linux-gnu/libshine.so.3 (0x00007f91dd0f6000)
librsvg-2.so.2 => /usr/lib/x86_64-linux-gnu/librsvg-2.so.2 (0x00007f91dcebe000)
libgobject-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0 (0x00007f91dcc6a000)
libglib-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0 (0x00007f91dc953000)
libcairo.so.2 => /usr/lib/x86_64-linux-gnu/libcairo.so.2 (0x00007f91dc636000)
libopus.so.0 => /usr/lib/x86_64-linux-gnu/libopus.so.0 (0x00007f91dc3ec000)
libopenjp2.so.7 => /usr/lib/x86_64-linux-gnu/libopenjp2.so.7 (0x00007f91dc196000)
libmp3lame.so.0 => /usr/lib/x86_64-linux-gnu/libmp3lame.so.0 (0x00007f91dbf1f000)
libgsm.so.1 => /usr/lib/x86_64-linux-gnu/libgsm.so.1 (0x00007f91dbd12000)
liblzma.so.5 => /lib/x86_64-linux-gnu/liblzma.so.5 (0x00007f91dbaec000)
libssh-gcrypt.so.4 => /usr/lib/x86_64-linux-gnu/libssh-gcrypt.so.4 (0x00007f91db878000)
libopenmpt.so.0 => /usr/lib/x86_64-linux-gnu/libopenmpt.so.0 (0x00007f91db4b0000)
libbluray.so.2 => /usr/lib/x86_64-linux-gnu/libbluray.so.2 (0x00007f91db260000)
libgnutls.so.30 => /usr/lib/x86_64-linux-gnu/libgnutls.so.30 (0x00007f91daefa000)
libxml2.so.2 => /usr/lib/x86_64-linux-gnu/libxml2.so.2 (0x00007f91dab39000)
libgme.so.0 => /usr/lib/x86_64-linux-gnu/libgme.so.0 (0x00007f91da8ed000)
libchromaprint.so.1 => /usr/lib/x86_64-linux-gnu/libchromaprint.so.1 (0x00007f91da6da000)
libbz2.so.1.0 => /lib/x86_64-linux-gnu/libbz2.so.1.0 (0x00007f91da4ca000)
libnppc.so.11 => /usr/local/cuda-11.1/targets/x86_64-linux/lib/libnppc.so.11 (0x00007f91da242000)
libxcb.so.1 => /usr/lib/x86_64-linux-gnu/libxcb.so.1 (0x00007f91da01a000)
libXext.so.6 => /usr/lib/x86_64-linux-gnu/libXext.so.6 (0x00007f91d9e08000)
libXfixes.so.3 => /usr/lib/x86_64-linux-gnu/libXfixes.so.3 (0x00007f91d9c02000)
libsoxr.so.0 => /usr/lib/x86_64-linux-gnu/libsoxr.so.0 (0x00007f91d999f000)
libpng16.so.16 => /usr/lib/x86_64-linux-gnu/libpng16.so.16 (0x00007f91d976d000)
libnuma.so.1 => /usr/lib/x86_64-linux-gnu/libnuma.so.1 (0x00007f91d9562000)
libogg.so.0 => /usr/lib/x86_64-linux-gnu/libogg.so.0 (0x00007f91d9359000)
libgdk_pixbuf-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libgdk_pixbuf-2.0.so.0 (0x00007f91d9135000)
libgio-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libgio-2.0.so.0 (0x00007f91d8d96000)
libpangocairo-1.0.so.0 => /usr/lib/x86_64-linux-gnu/libpangocairo-1.0.so.0 (0x00007f91d8b89000)
libpangoft2-1.0.so.0 => /usr/lib/x86_64-linux-gnu/libpangoft2-1.0.so.0 (0x00007f91d8973000)
libpango-1.0.so.0 => /usr/lib/x86_64-linux-gnu/libpango-1.0.so.0 (0x00007f91d8726000)
libfontconfig.so.1 => /usr/lib/x86_64-linux-gnu/libfontconfig.so.1 (0x00007f91d84e1000)
libcroco-0.6.so.3 => /usr/lib/x86_64-linux-gnu/libcroco-0.6.so.3 (0x00007f91d82a6000)
libffi.so.6 => /usr/lib/x86_64-linux-gnu/libffi.so.6 (0x00007f91d809e000)
libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f91d7e2c000)
libpixman-1.so.0 => /usr/lib/x86_64-linux-gnu/libpixman-1.so.0 (0x00007f91d7b87000)
libfreetype.so.6 => /usr/lib/x86_64-linux-gnu/libfreetype.so.6 (0x00007f91d78d3000)
libxcb-shm.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-shm.so.0 (0x00007f91d76d0000)
libxcb-render.so.0 => /usr/lib/x86_64-linux-gnu/libxcb-render.so.0 (0x00007f91d74c3000)
libXrender.so.1 => /usr/lib/x86_64-linux-gnu/libXrender.so.1 (0x00007f91d72b9000)
libgcrypt.so.20 => /lib/x86_64-linux-gnu/libgcrypt.so.20 (0x00007f91d6f9d000)
libgssapi_krb5.so.2 => /usr/lib/x86_64-linux-gnu/libgssapi_krb5.so.2 (0x00007f91d6d52000)
libmpg123.so.0 => /usr/lib/x86_64-linux-gnu/libmpg123.so.0 (0x00007f91d6af3000)
libvorbisfile.so.3 => /usr/lib/x86_64-linux-gnu/libvorbisfile.so.3 (0x00007f91d68eb000)
libp11-kit.so.0 => /usr/lib/x86_64-linux-gnu/libp11-kit.so.0 (0x00007f91d65bc000)
libidn2.so.0 => /usr/lib/x86_64-linux-gnu/libidn2.so.0 (0x00007f91d639f000)
libunistring.so.2 => /usr/lib/x86_64-linux-gnu/libunistring.so.2 (0x00007f91d6021000)
libtasn1.so.6 => /usr/lib/x86_64-linux-gnu/libtasn1.so.6 (0x00007f91d5e0e000)
libnettle.so.6 => /usr/lib/x86_64-linux-gnu/libnettle.so.6 (0x00007f91d5bd8000)
libhogweed.so.4 => /usr/lib/x86_64-linux-gnu/libhogweed.so.4 (0x00007f91d59a2000)
libgmp.so.10 => /usr/lib/x86_64-linux-gnu/libgmp.so.10 (0x00007f91d5721000)
libicuuc.so.60 => /usr/lib/x86_64-linux-gnu/libicuuc.so.60 (0x00007f91d5369000)
libXau.so.6 => /usr/lib/x86_64-linux-gnu/libXau.so.6 (0x00007f91d5165000)
libXdmcp.so.6 => /usr/lib/x86_64-linux-gnu/libXdmcp.so.6 (0x00007f91d4f5f000)
libgomp.so.1 => /usr/lib/x86_64-linux-gnu/libgomp.so.1 (0x00007f91d4d30000)
libgmodule-2.0.so.0 => /usr/lib/x86_64-linux-gnu/libgmodule-2.0.so.0 (0x00007f91d4b2c000)
libselinux.so.1 => /lib/x86_64-linux-gnu/libselinux.so.1 (0x00007f91d4904000)
libresolv.so.2 => /lib/x86_64-linux-gnu/libresolv.so.2 (0x00007f91d46ea000)
libmount.so.1 => /lib/x86_64-linux-gnu/libmount.so.1 (0x00007f91d4496000)
libharfbuzz.so.0 => /usr/lib/x86_64-linux-gnu/libharfbuzz.so.0 (0x00007f91d41f8000)
libthai.so.0 => /usr/lib/x86_64-linux-gnu/libthai.so.0 (0x00007f91d3fef000)
libgpg-error.so.0 => /lib/x86_64-linux-gnu/libgpg-error.so.0 (0x00007f91d3dda000)
libkrb5.so.3 => /usr/lib/x86_64-linux-gnu/libkrb5.so.3 (0x00007f91d3b04000)
libk5crypto.so.3 => /usr/lib/x86_64-linux-gnu/libk5crypto.so.3 (0x00007f91d38d2000)
libcom_err.so.2 => /lib/x86_64-linux-gnu/libcom_err.so.2 (0x00007f91d36ce000)
libkrb5support.so.0 => /usr/lib/x86_64-linux-gnu/libkrb5support.so.0 (0x00007f91d34c3000)
libicudata.so.60 => /usr/lib/x86_64-linux-gnu/libicudata.so.60 (0x00007f91d191a000)
libbsd.so.0 => /lib/x86_64-linux-gnu/libbsd.so.0 (0x00007f91d1705000)
libblkid.so.1 => /lib/x86_64-linux-gnu/libblkid.so.1 (0x00007f91d14b8000)
libgraphite2.so.3 => /usr/lib/x86_64-linux-gnu/libgraphite2.so.3 (0x00007f91d128b000)
libdatrie.so.1 => /usr/lib/x86_64-linux-gnu/libdatrie.so.1 (0x00007f91d1084000)
libkeyutils.so.1 => /lib/x86_64-linux-gnu/libkeyutils.so.1 (0x00007f91d0e80000)
libuuid.so.1 => /lib/x86_64-linux-gnu/libuuid.so.1 (0x00007f91d0c79000)
(venv) ubuntu@ip-172-31-9-127:~/pycharm/libs/VideoProcessingFramework/install/bin$ source ~/.bashrc
[setupvars.sh] OpenVINO environment initialized
@mfoglio
You are in 'detached HEAD' state.
You're not at the branch ToT (tip of tree). Please pull the latest commit:
git checkout v1.1.1
git pull origin v1.1.1
It didn't work, but checking out the commit id worked. I noticed you have both a branch and a tag called v1.1.1. Maybe that was causing some issue? The code seems to run. I will do more tests tomorrow. As always, thank you very much for your support @rarzumanyan !
@mfoglio
I noticed you both have a branch and a tag called v1.1.1
Thanks for bringing this up, I think this is the reason indeed. I've got to learn a thing or two about scheduling releases in GitHub!
Anyway, I was planning to merge to master
before the v1.1.1
branch diverged too far.
So please find the latest changes at master
ToT.
@rarzumanyan , I am still testing your code. Meanwhile I have a question: I think that before, I could start a demuxer using self.nvDmx = nvc.PyFFmpegDemuxer(self.proc.stdout)
. Would there be a way to make this available again? I am not sure it would work, but maybe it would be possible to read height, width, format and codec from this demuxer. Again, I'm not sure whether that would make this information retrieval part more stable, but maybe it's worth an attempt.
Hi @mfoglio
Yes, I've removed that functionality; stream support between C++ and Python turned out to be a huge pain in the back.
For that reason I'd rather parse ffprobe output than support streams (users will inevitably try to use streams for actual demuxing).
One possible way to work around this and get stream properties would be to use the PyAV project, because parsing ffprobe output is, in my opinion, not a great idea.
@mfoglio
Take a look at this sample: https://github.com/NVIDIA/VideoProcessingFramework/blob/pyav_support/SamplePyav.py It was developed when I was experimenting with PyAV bitstream filters and shows how to get stream properties with PyAV.
hi @mfoglio
I've modified SampleDecodeRTSP.py
in the master
branch; it now uses PyAV to get video stream properties, please take a look.
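The property retrieval described above can be sketched as a small helper. This is an illustration, not part of VPF: the attribute names (streams.video, codec_context.width/height/pix_fmt/name) follow PyAV's public API, but the helper function itself is hypothetical.

```python
def video_stream_props(container):
    """Return (width, height, pix_fmt, codec_name) of the first video stream.

    Works on a PyAV container (or anything exposing the same attributes).
    """
    stream = container.streams.video[0]
    ctx = stream.codec_context
    return ctx.width, ctx.height, ctx.pix_fmt, ctx.name

# Usage with a real stream would look roughly like:
#   import av
#   container = av.open("rtsp://camera/stream")
#   w, h, fmt, codec = video_stream_props(container)
```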
Thank you very much @rarzumanyan . What should I do if in_stream.codec_context.pix_fmt
is equal to yuvj420p
?
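For context on the question above: the yuvj* formats are FFmpeg's deprecated "full range" (JPEG) variants of the corresponding yuv* formats. A common workaround is to decode them as the limited-range name and track the range flag separately; the mapping below is a sketch of that idea, not VPF or FFmpeg API.

```python
# Full-range (JPEG) pixel format names and their limited-range equivalents.
FULL_RANGE_ALIASES = {
    "yuvj420p": "yuv420p",
    "yuvj422p": "yuv422p",
    "yuvj444p": "yuv444p",
}

def normalize_pix_fmt(pix_fmt):
    """Return (canonical_pix_fmt, is_full_range)."""
    if pix_fmt in FULL_RANGE_ALIASES:
        return FULL_RANGE_ALIASES[pix_fmt], True
    return pix_fmt, False
```

Any downstream color conversion would then need to account for the full sample range [0, 255] instead of the limited [16, 235].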
I want to decode as many RTSP streams as possible on a single GPU. Since my application is incapable of processing 30 FPS per stream, it wouldn't be an issue if some of the frames were dropped; I probably won't need more than 5 FPS per stream. I am assuming there could be a way to reduce the workload by dropping data at some unknown-to-me step in the pipeline. I would also need to process the streams in real time. When following the PyTorch tutorial from the wiki I found some kind of delay: if I stopped my application for a while (e.g.
time.sleep(30)
) and then resumed it, the pipeline was returning me frames from 30
seconds ago. I would like the pipeline to always return real-time frames. I believe this would also imply using less memory, since older data could be dropped. Memory is particularly important for me since I want to decode many streams. I only know the high-level details of H.264 video decoding: I know that P, B, and I frames mean that you cannot simply drop some data and then start decoding without possibly encountering corrupted frames. However, I have encountered similar issues with gstreamer
on CPU before (high CPU usage, more frames decoded than needed, delays and high memory usage), and I came up with a pipeline that was able to reduce delays (therefore also saving memory) while always returning me real-time (present) frames. How can I achieve my goal? Is there any argument I could pass to the PyNvDecoder
? I see it can receive a dict
as argument, but I couldn't find more details. Here's the code that I am using so far; it is basically the PyTorch wiki tutorial. Any hint on where to start would be really appreciated. This project is fantastic!
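The "always return a real-time frame" behavior asked about above can be approximated at the Python level with a size-1 buffer where the producer evicts the stale frame before publishing a new one, so the consumer always sees the most recently decoded frame. This is a sketch using only the standard library (single producer, single consumer); the class name is illustrative, not VPF API.

```python
import queue

class LatestFrameQueue:
    """Size-1 queue that keeps only the newest item (single producer/consumer)."""

    def __init__(self):
        self._q = queue.Queue(maxsize=1)

    def put(self, frame):
        # Drop whatever is already buffered; only the newest frame survives.
        try:
            self._q.get_nowait()
        except queue.Empty:
            pass
        self._q.put_nowait(frame)

    def get(self, timeout=None):
        # Blocks until the producer has published at least one frame.
        return self._q.get(timeout=timeout)
```

With the decoder thread calling put() for every decoded frame and the consumer calling get(), frames that were never fetched are simply discarded, which bounds both latency and memory. Note this drops frames after decoding; it does not reduce the Nvdec workload itself.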