uwerat / vnc-eglfs

VNC server for Qt/Quick on EGLFS
BSD 3-Clause "New" or "Revised" License

Video Acceleration (VA) API #14

Open · uwerat opened 2 years ago

uwerat commented 2 years ago

Let's see how to make better use of the GPU: https://intel.github.io/libva/index.html
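
A first step is just opening a VA display on a DRM render node and checking whether the driver offers a JPEG encoding entrypoint. A minimal sketch ( the device path is an assumption - it differs between systems ):

```cpp
#include <va/va.h>
#include <va/va_drm.h>

#include <fcntl.h>
#include <unistd.h>

#include <cstdio>
#include <vector>

int main()
{
    const int drmFd = open( "/dev/dri/renderD128", O_RDWR ); // assumed device path
    if ( drmFd < 0 )
        return 1;

    VADisplay dpy = vaGetDisplayDRM( drmFd );

    int major = 0, minor = 0;
    if ( vaInitialize( dpy, &major, &minor ) != VA_STATUS_SUCCESS )
        return 1;

    printf( "VA-API %d.%d, driver: %s\n",
        major, minor, vaQueryVendorString( dpy ) );

    // check for a JPEG encoding entrypoint
    int num = vaMaxNumEntrypoints( dpy );
    std::vector< VAEntrypoint > entrypoints( num );

    if ( vaQueryConfigEntrypoints( dpy, VAProfileJPEGBaseline,
        entrypoints.data(), &num ) == VA_STATUS_SUCCESS )
    {
        for ( int i = 0; i < num; i++ )
        {
            if ( entrypoints[i] == VAEntrypointEncPicture )
                printf( "JPEG encoding is supported\n" );
        }
    }

    vaTerminate( dpy );
    close( drmFd );

    return 0;
}
```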

uwerat commented 2 years ago

Did some first attempts with JPEG and got "something" using the old driver ( export LIBVA_DRIVER_NAME=i965 ). But my test scenario differs from the intended solution, as transferring the image to a VASurface involves downloading the frame from the GPU and uploading it again.

Encoding seems to be more than twice as fast as libjpeg-turbo ( including the upload to / download from the GPU ) for an image of 600x600 pixels. However, the colors are wrong and some lines are shifted when setting certain quality values.
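
A sketch of what such an upload into a VASurface looks like - simplified, a real implementation has to respect the strides and plane offsets reported in the VAImage ( dpy being a VADisplay as above ):

```cpp
#include <va/va.h>

#include <cstring>
#include <cstdint>

bool uploadToSurface( VADisplay dpy, VASurfaceID surface,
    const uint8_t* pixels, size_t size )
{
    VAImage image;

    // derive a CPU mappable view of the surface
    if ( vaDeriveImage( dpy, surface, &image ) != VA_STATUS_SUCCESS )
        return false;

    void* ptr = nullptr;
    if ( vaMapBuffer( dpy, image.buf, &ptr ) != VA_STATUS_SUCCESS )
    {
        vaDestroyImage( dpy, image.image_id );
        return false;
    }

    // this memcpy is the "upload" mentioned above
    memcpy( ptr, pixels, size );

    vaUnmapBuffer( dpy, image.buf );
    vaDestroyImage( dpy, image.image_id );

    return true;
}
```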

unintialized commented 11 months ago

Would it be easier to use the neatvnc library for hardware acceleration?

uwerat commented 11 months ago

It is not about hardware acceleration in general - most of that is already achieved by the current implementation. The missing part is using the encoders of the GPU without having to download the rendered frame to main memory first.
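
Technically this would mean something like the following hypothetical sketch ( not implemented ): exporting the rendered frame as a DMA-BUF - e.g. via eglExportDMABUFImageMESA() - and importing it into a VASurface as a DRM PRIME handle, so the encoder can read it without a round trip through main memory:

```cpp
#include <va/va.h>

#include <cstdint>

VASurfaceID importDmaBuf( VADisplay dpy, int dmabufFd,
    uint32_t width, uint32_t height, uint32_t stride )
{
    uintptr_t handle = static_cast< uintptr_t >( dmabufFd );

    // describe the external buffer; assuming a single plane 32 bit RGB frame
    VASurfaceAttribExternalBuffers extbuf = {};
    extbuf.pixel_format = VA_FOURCC_BGRX;
    extbuf.width = width;
    extbuf.height = height;
    extbuf.data_size = stride * height;
    extbuf.num_planes = 1;
    extbuf.pitches[0] = stride;
    extbuf.offsets[0] = 0;
    extbuf.buffers = &handle;
    extbuf.num_buffers = 1;

    VASurfaceAttrib attribs[2] = {};

    attribs[0].type = VASurfaceAttribMemoryType;
    attribs[0].flags = VA_SURFACE_ATTRIB_SETTABLE;
    attribs[0].value.type = VAGenericValueTypeInteger;
    attribs[0].value.value.i = VA_SURFACE_ATTRIB_MEM_TYPE_DRM_PRIME;

    attribs[1].type = VASurfaceAttribExternalBufferDescriptor;
    attribs[1].flags = VA_SURFACE_ATTRIB_SETTABLE;
    attribs[1].value.type = VAGenericValueTypePointer;
    attribs[1].value.value.p = &extbuf;

    VASurfaceID surface = VA_INVALID_SURFACE;
    vaCreateSurfaces( dpy, VA_RT_FORMAT_RGB32, width, height,
        &surface, 1, attribs, 2 );

    return surface;
}
```

Whether the i965/iHD drivers accept RGB surfaces for JPEG encoding directly, or a color conversion pass is needed first, would have to be checked - that might also be related to the wrong colors mentioned above.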

As far as I can see ( from a very brief check ), the neatvnc project has an optional dependency on the ffmpeg libraries, and libavutil is used in a file called h264-encoder.c. There is a function "h264_encoder_feed" with a parameter "struct nvnc_fb". I might be wrong, but looking at this struct it seems to contain a pointer to the frame that is to be encoded.

For JPEG the neatvnc project seems to use libturbojpeg. This is usually ( depending on how Qt was built ) also the library behind the implementation of QImageWriter, so it is what vnc-eglfs uses as well.
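
For illustration, a sketch of what encoding through QImageWriter looks like ( not copied from the vnc-eglfs sources ):

```cpp
#include <QImageWriter>
#include <QImage>
#include <QBuffer>

QByteArray encodeJpeg( const QImage& frame, int quality )
{
    QByteArray bytes;

    QBuffer buffer( &bytes );
    buffer.open( QIODevice::WriteOnly );

    // dispatches to Qt's JPEG plugin - usually backed by libjpeg-turbo,
    // depending on how Qt was configured
    QImageWriter writer( &buffer, "jpeg" );
    writer.setQuality( quality );
    writer.write( frame );

    return bytes;
}
```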

So if my quick analysis is correct, the neatvnc implementation expects the frames to be in main memory ( not on the GPU ) before encoding. But please correct me if I missed something here ...