bmegli / hardware-video-streaming

Hardware Video Streaming meta repository
Mozilla Public License 2.0

accelerated depth streaming #1

Closed (bmegli closed this issue 4 years ago)

bmegli commented 4 years ago

Continuing work from the realsense-ir-to-vaapi-h264 issue "encoding depth stream", where the plan was sketched out.

Extending HVE for HEVC support

Done in HVE ac3a4c1. A P010LE encoding example was also added (minimal sketch below).
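For reference, a sketch of what the extended interface enables: HEVC Main10 encoding of P010LE data. This assumes the post-ac3a4c1 API (`hve_init`/`hve_send_frame`/`hve_receive_packet`/`hve_close`) as documented in the HVE README; treat the `hve_config` field names as assumptions to be checked against hve.h, which changed across versions.

```cpp
// Sketch: HEVC Main10 / P010LE encoding loop with HVE (field names assumed
// from the HVE README of that era; verify against hve.h)
#include <hve.h> // pulls in libavcodec headers (AVPacket, FF_PROFILE_*)
#include <cstdint>
#include <vector>

int main()
{
    const int W = 848, H = 480, FPS = 30; // placeholder stream parameters

    hve_config config = {};
    config.width = W;
    config.height = H;
    config.framerate = FPS;
    config.device = "/dev/dri/renderD128";
    config.encoder = "hevc_vaapi";            // HEVC instead of default H.264
    config.pixel_format = "p010le";           // 10-bit 4:2:0, semi-planar
    config.profile = FF_PROFILE_HEVC_MAIN_10;

    hve *encoder = hve_init(&config);
    if (!encoder)
        return 1;

    // Y plane and interleaved UV plane, 16 bits per sample;
    // 512 << 6 is neutral 10-bit chroma (grey)
    std::vector<uint16_t> y(W * H), uv(W * H / 2, 512 << 6);

    for (int f = 0; f < 300; ++f)
    {
        // ... fill y: 10 significant bits in the HIGH bits of each word ...

        hve_frame frame = {};
        frame.data[0] = (uint8_t*)y.data();
        frame.data[1] = (uint8_t*)uv.data();
        frame.linesize[0] = frame.linesize[1] = W * 2; // bytes per row

        if (hve_send_frame(encoder, &frame) != HVE_OK)
            break;

        int failed;
        AVPacket *packet;
        while ((packet = hve_receive_packet(encoder, &failed)))
            ; // packet->data, packet->size hold the encoded HEVC data
    }

    hve_close(encoder);
    return 0;
}
```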

HEVC Main10 depth encoding example

Done in realsense-depth-to-vaapi-hevc10. This configures the Realsense to output a P016LE Y plane which is fed directly to the hardware for encoding as P010LE (the two formats are binary compatible). The range/precision trade-off can be controlled; see the sketch below.
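The trade-off control boils down to the depth units. A minimal librealsense2 sketch, with placeholder resolution, framerate and unit values:

```cpp
// Sketch: Realsense depth configuration and the range/precision trade-off
#include <librealsense2/rs.hpp>
#include <cstdint>

int main()
{
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_DEPTH, 848, 480, RS2_FORMAT_Z16, 30);

    rs2::pipeline pipe;
    rs2::pipeline_profile profile = pipe.start(cfg);

    // HEVC Main10 keeps only the top 10 of the 16 depth bits, so one encoded
    // step is 64 depth units; with 0.0001 m units that is 6.4 mm precision
    // over 65535 * 0.0001 ≈ 6.5 m of range (smaller units = finer precision,
    // shorter range)
    rs2::depth_sensor sensor = profile.get_device().first<rs2::depth_sensor>();
    sensor.set_option(RS2_OPTION_DEPTH_UNITS, 0.0001f);

    while (true)
    {
        rs2::frameset frames = pipe.wait_for_frames();
        rs2::depth_frame depth = frames.get_depth_frame();

        // 16-bit depth data, usable directly as the P016LE/P010LE Y plane
        const uint16_t *y_plane = (const uint16_t*)depth.get_data();
        (void)y_plane; // feed to the hardware encoder here
    }
}
```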

Extending HVD for HEVC support

This is already supported.

Extending NHVE for HEVC support

Extend NHVE with the new HVE encoder interface and add a synthetic, procedurally generated HEVC Main10 P010LE example (sketched below).
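Generating valid synthetic P010LE data only requires keeping 10 significant bits in the high bits of each 16-bit word. A sketch of the procedural part, independent of the NHVE interface (which is still to be defined at this point):

```cpp
// Sketch: procedurally generate one P010LE frame (a moving gradient).
// P010LE: 16 bits per sample, 10 significant bits stored in the HIGH bits,
// semi-planar 4:2:0 (Y plane followed by an interleaved UV plane).
#include <cstdint>
#include <vector>

void fill_p010le_frame(std::vector<uint16_t> &y, std::vector<uint16_t> &uv,
                       int width, int height, int frame_number)
{
    y.resize(width * height);
    uv.resize(width * height / 2);

    for (int r = 0; r < height; ++r)
        for (int c = 0; c < width; ++c)
        {
            uint16_t v10 = (c + frame_number) & 0x3FF; // 10-bit gradient
            y[r * width + c] = v10 << 6;               // shift into high bits
        }

    for (auto &s : uv)       // constant chroma; 512 is the neutral
        s = 512 << 6;        // value for 10-bit U/V
}
```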

Extending RNHVE to support depth streaming apart from color/infrared

Rather straightforward. The only problem is that RNHVE currently uses H.264; the solution may be a separate repository or a configurable codec.

Extending UNHVD for depth data or creating separate project that decodes and feeds point cloud data to Unity

A bit involved if performance, framerate and low latency are to be kept.


bmegli commented 4 years ago

Extending NHVE for HEVC support

bmegli commented 4 years ago

Extending RNHVE to support depth streaming apart from color/infrared

bmegli commented 4 years ago

Extending HVD, NHVD, UNHVD for HEVC support


bmegli commented 4 years ago

This is finished already.

Video example of working functionality:

Hardware Accelerated Point Cloud Streaming

bmegli commented 4 years ago

Further improvements were discussed in librealsense#5799.

A zero-copy pipeline for point cloud streaming (decoding, unprojection, rendering) was sketched out there.

There are three more things that can be done:

1. OpenCL unprojection step (hardware accelerated unprojection)

In most cases when hardware decoding HEVC with VAAPI we end up with the data on the GPU side. We can use the OpenCL/VAAPI sharing extension, namely cl_intel_va_api_media_sharing (a sketch follows the list).

2. Map decoded VAAPI data to OpenCL (zero copy unprojection)

Finally, it should be possible to use OpenCL/OpenGL sharing to map the unprojected data to an OpenGL vertex buffer, which in turn may be rendered with a shader (a sketch follows at the end of this comment).

3. Map unprojected OpenCL data to OpenGL (zero copy rendering)
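For steps 1-2, a hedged sketch of the VAAPI to OpenCL mapping. The entry points belong to the cl_intel_va_api_media_sharing extension and must be resolved at runtime; the VADisplay/VASurfaceID are assumed to come from the decoder (e.g. from FFmpeg's VAAPI hwcontext):

```cpp
// Sketch: map a decoded VAAPI surface into OpenCL without copying
#include <CL/cl.h>
#include <CL/cl_va_api_media_sharing_intel.h>
#include <va/va.h>

cl_mem map_vaapi_surface(cl_platform_id platform, VADisplay va_display,
                         VASurfaceID *surface, cl_context &context_out)
{
    // Extension entry points are not exported directly
    auto getDeviceIDs = (clGetDeviceIDsFromVA_APIMediaAdapterINTEL_fn)
        clGetExtensionFunctionAddressForPlatform(
            platform, "clGetDeviceIDsFromVA_APIMediaAdapterINTEL");
    auto createFromSurface = (clCreateFromVA_APIMediaSurfaceINTEL_fn)
        clGetExtensionFunctionAddressForPlatform(
            platform, "clCreateFromVA_APIMediaSurfaceINTEL");

    // Find the OpenCL device that shares the VAAPI display
    cl_device_id device;
    cl_uint num_devices;
    getDeviceIDs(platform, CL_VA_API_DISPLAY_INTEL, va_display,
                 CL_PREFERRED_DEVICES_FOR_VA_API_INTEL, 1, &device,
                 &num_devices);

    // Create a context tied to the VAAPI display
    cl_context_properties props[] = {
        CL_CONTEXT_VA_API_DISPLAY_INTEL, (cl_context_properties)va_display, 0};
    cl_int err;
    context_out = clCreateContext(props, 1, &device, nullptr, nullptr, &err);

    // Map plane 0 (the Y plane holding the depth data) of the decoded
    // surface; the unprojection kernel reads it after
    // clEnqueueAcquireVA_APIMediaSurfacesINTEL
    return createFromSurface(context_out, CL_MEM_READ_ONLY, surface, 0, &err);
}
```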

Adding those three elements we end up with the ultimate zero-copy, hardware-accelerated point cloud pipeline including:

  • decoding
  • unprojection
  • rendering
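And a hedged sketch of step 3, using standard cl_khr_gl_sharing. It assumes the OpenCL context was created against the current GL context (CL_GL_CONTEXT_KHR and related properties) and that the unprojection kernel and vertex buffer exist elsewhere; `unproject_into_gl_buffer` and its parameters are hypothetical names for illustration:

```cpp
// Sketch: let the OpenCL unprojection kernel write vertices straight into
// the OpenGL vertex buffer that the renderer draws (zero copy rendering)
#include <CL/cl.h>
#include <CL/cl_gl.h>
#include <GL/gl.h>

void unproject_into_gl_buffer(cl_context context, cl_command_queue queue,
                              cl_kernel unproject_kernel, GLuint vbo,
                              size_t point_count)
{
    cl_int err;

    // Wrap the existing GL vertex buffer as an OpenCL buffer (no copy)
    cl_mem vertices = clCreateFromGLBuffer(context, CL_MEM_WRITE_ONLY,
                                           vbo, &err);

    // GL must be finished with the buffer before OpenCL touches it
    glFinish();
    clEnqueueAcquireGLObjects(queue, 1, &vertices, 0, nullptr, nullptr);

    // The unprojection kernel writes xyz points into the buffer
    // (its other arguments - depth image, intrinsics - are set elsewhere)
    clSetKernelArg(unproject_kernel, 0, sizeof(cl_mem), &vertices);
    clEnqueueNDRangeKernel(queue, unproject_kernel, 1, nullptr,
                           &point_count, nullptr, 0, nullptr, nullptr);

    // Hand the buffer back to OpenGL for rendering with a shader
    clEnqueueReleaseGLObjects(queue, 1, &vertices, 0, nullptr, nullptr);
    clFinish(queue);

    clReleaseMemObject(vertices);
}
```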