elsampsa / valkka-core

Valkka - Create video surveillance, management and analysis programs with PyQt
GNU Lesser General Public License v3.0

Decoding not called and "resource temporarily unavailable" with H.265 4K@25Hz stream input #22

Closed LuletterSoul closed 3 years ago

LuletterSoul commented 3 years ago

Hi, I ran lesson_4_a.py from valkka-examples to test the shared-memory filter streaming on a 4K video stream. When I changed the original RTSP URL to my own video source and ran the script, all packages were imported normally, but the following error was printed:

    OpenGLThread: loadExtensions: no swap control: there's something wrong with your graphics driver: Resource temporarily unavailable
    OpenGLThread::getSwapInterval: could not get swap interval
    OpenGLThread::setSwapInterval: could not set swap interval
    Created new TCP socket 6 for connection
    bye

Separately, I also ran lesson_2_a.py as a simple test of valkka. But it doesn't show any stream information; it only prints:

  Created new TCP socket 6 for connection

It seems av_thread doesn't execute the decoding process. When I change the video source to a public RTSP URL, it prints all the logs mentioned in the script's comments. So have I made a mistake somewhere, or does valkka simply not support H.265 4K streams?

elsampsa commented 3 years ago

Sorry, no H265 support at the moment!

I guess you are viewing your video on a 55-inch screen and/or need to analyze millimeter-sized details in your video, since you need 4K. :)

Here are some considerations: https://elsampsa.github.io/valkka-examples/_build/html/decoding.html#multithread

LuletterSoul commented 3 years ago

@elsampsa Any plans to add H.265 decoding in the future? I remember that H.265 is natively supported by the ffmpeg library. I need to integrate your amazing valkka into our object detection system. The multiprocessing shared-memory functionality for video frames is exactly what I need in this framework; we don't actually need to view the video on a 55-inch screen. :)

elsampsa commented 3 years ago

There are quite many - actually too many - plans for the future. There are so many ways / directions to take this framework (adding different codecs, adding sound, making valkkafs usable, adding hardware decoding, etc.) & only a limited time to do them. At the moment I'm not using H265 for any of my libValkka-based projects, so it's not a high priority.

Can't you get H264 from your cameras? Why not use that instead? I would say that H264 and 1080p running at max. 15 fps is enough for 99% of video streaming / machine learning applications out there. Camera manufacturers are pushing H265 and 4K at god-knows-what-60+fps since they are not making a buck from vanilla IP cameras anymore.

H264 has the additional upside that it is understood by all modern web browsers, so if you're streaming from your IP camera into a web browser, there's no need to transcode (just remux).

And there is also the possibility that you get involved and include the H265 decoding into libValkka. :)

elsampsa commented 3 years ago

One more solution if you insist on using H265 (or anything else not accessible natively by libValkka).

You can create a multiprocess that reads your (H265) source (camera, etc.) using, for example, OpenCV.

Then, in that multiprocess, create custom shared memory server(s) [this is part of the libValkka API].

Then create "client" multiprocesses that create shared memory client(s) listening to the shared-memory server(s).

Of course, be careful to create the servers before the clients, and to close the clients before garbage-collecting the servers, etc. A good example for all that can be found here
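The ordering described above (server created before any client, clients closed before the server is torn down) can be sketched with Python's standard-library `multiprocessing.shared_memory` as a stand-in. Note this is only an illustration of the pattern, not libValkka's own shared-memory API (whose class names and signatures differ), and `FRAME_SHAPE` and the constant fill value are made-up placeholders:

```python
import numpy as np
from multiprocessing import Process, Queue
from multiprocessing.shared_memory import SharedMemory

FRAME_SHAPE = (1080, 1920, 3)  # hypothetical 1080p RGB frame


def client(name, out_queue):
    # Client: attach to a segment that the server has already created.
    shm = SharedMemory(name=name)
    frame = np.ndarray(FRAME_SHAPE, dtype=np.uint8, buffer=shm.buf)
    out_queue.put(int(frame[0, 0, 0]))  # read one pixel as a sanity check
    shm.close()  # client closes its handle before the server unlinks


def main():
    # Server: create the shared segment *before* starting any client.
    nbytes = int(np.prod(FRAME_SHAPE))
    shm = SharedMemory(create=True, size=nbytes)
    frame = np.ndarray(FRAME_SHAPE, dtype=np.uint8, buffer=shm.buf)
    frame[:] = 42  # stand-in for a frame decoded e.g. with OpenCV

    q = Queue()
    p = Process(target=client, args=(shm.name, q))
    p.start()
    value = q.get()
    p.join()

    # Tear down in the right order: clients have closed, then unlink.
    shm.close()
    shm.unlink()
    return value


if __name__ == "__main__":
    print(main())  # prints 42
```

The important part is the lifecycle: the segment must exist before a client attaches, and every client must have closed its handle before the server unlinks the segment, mirroring the server-before-client / client-before-garbage-collection rule above.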

LuletterSoul commented 3 years ago

@elsampsa Thanks for your outstanding solutions, bro !

  1. Why do I have to use H265 video streams? It's required by the project's input sources; we cannot control the video encoding server, so I have to adapt my code to this application.

  2. Why do I have to use 4K@60Hz video streams? You are definitely right that 15 fps is enough for our machine learning applications, but our task requires that a series of high-quality video clips be generated during model detection. I have to save a batch of original video frames to complete the generation task without compressing image quality.

  3. OpenCV decoding is costly in CPU resources without GPU hardware acceleration, so I am preparing to bring the Nvidia GPU acceleration SDK into my project. If you are interested in this feature, you can find a related implementation here.

BTW, you have provided a very good solution for passing real-time video frames between different multiprocesses. However, if some video analysis tasks are running on different machines, the shared-memory mechanism of a single machine won't work anymore. Does there exist a distributed design that doesn't lose the real-time property?

elsampsa commented 3 years ago

Does there exist a distributed design that doesn't lose the real-time property?

I guess you are talking about the very RTSP protocol itself. :)

I actually want to include both nvidia and huawei ascend hw acceleration in libValkka, most likely as a separate module. This project would consist of subclassing AVThread, first to decode H264 and after that H265. Not sure when I will have time to do that, though.

ashuezy commented 3 years ago

@elsampsa What do you think is the most efficient library to decode an RTSP stream (either H264 or H265)? Least CPU usage.

elsampsa commented 3 years ago

This starts to be OT for libValkka issues. If you want, mail me & we can continue via email.

elsampsa commented 3 years ago

FYI: there is now a separate extension module for doing decoding with nvidia cuda. No H265 support yet, though.