This program is an example of how to use an Intel RealSense camera with hardware-accelerated 10-bit HEVC (VAAPI) depth encoding.
- See the benchmarks on the wiki for CPU/GPU usage.
- See how it works on the wiki to understand the code.
- See hardware-video-streaming for other related projects.
- See the video for a wireless point cloud streaming example.
This program uses a general-purpose video codec for depth map encoding, so the result will not be perfect (lossy compression of depth data).
If real-time hardware encoding is not a requirement, consider the HEVC 3D extension software encoder instead.
Unix-like operating systems (e.g. Linux); tested on Ubuntu 18.04 with a D435 camera.
The program depends on:

- RealSense™ SDK 2.0 (librealsense) - install as described on GitHub
- HVE - included as a submodule; you only need to meet its dependencies (FFmpeg)

HVE works with the system FFmpeg on Ubuntu 18.04 but not on 16.04 (outdated FFmpeg and VAAPI ecosystem).
The following instructions were tested on Ubuntu 18.04.
# update package repositories
sudo apt-get update
# get avcodec and avutil (and ffmpeg for testing)
sudo apt-get install ffmpeg libavcodec-dev libavutil-dev
# get compilers and make
sudo apt-get install build-essential
# get cmake - libcurl4 is listed explicitly to work around a dependency problem on Ubuntu 18.04
sudo apt-get install libcurl4 cmake
# get git
sudo apt-get install git
# clone the repository (don't forget `--recursive` for submodule!)
git clone --recursive https://github.com/bmegli/realsense-depth-to-vaapi-hevc10.git
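If you forgot `--recursive`, the HVE subdirectory will be empty. You can check and fix this after cloning with standard git commands:

```shell
# check that the HVE submodule was actually fetched;
# an uninitialized submodule is listed with a leading '-'
git submodule status

# fetch it after the fact if --recursive was missed
git submodule update --init --recursive
```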
# finally build the program
cd realsense-depth-to-vaapi-hevc10
mkdir build
cd build
cmake ..
make
# realsense-depth-to-vaapi-hevc10 <width> <height> <framerate> <depth units> <seconds> [device]
# e.g.
./realsense-depth-to-vaapi-hevc10 848 480 30 0.0001 5
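The depth units parameter scales the camera's 16-bit depth values to meters, trading range for precision. As a rough guide (assuming 16-bit depth data, which is what RealSense devices produce):

```shell
# maximum representable range = 65535 (max 16-bit value) * depth unit in meters
# with 0.0001 m per unit, as in the example above:
awk 'BEGIN { print 65535 * 0.0001 }'
# 6.5535 (meters of range, at 0.1 mm resolution)
```

Smaller depth units give finer resolution at the cost of maximum range, and vice versa.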
Details:
If you have multiple VAAPI devices, you may have to point the program at the Intel one explicitly.
Check with vainfo:
sudo apt-get install vainfo
# try the devices you have in /dev/dri/ path
vainfo --display drm --device /dev/dri/renderD128
Once you have identified your Intel device, pass it as the last argument, e.g.
./realsense-depth-to-vaapi-hevc10 848 480 30 0.0001 5 /dev/dri/renderD128
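A quick way to probe all render nodes at once (a sketch; assumes vainfo is installed and the usual /dev/dri/renderD* device paths):

```shell
# print the driver reported by each DRM render node;
# the Intel device is the one whose driver line mentions i965 or iHD
for dev in /dev/dri/renderD*; do
    [ -e "$dev" ] || continue                 # skip if no render nodes exist
    echo "== $dev =="
    vainfo --display drm --device "$dev" 2>&1 | grep -i driver || true
done
```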
Play the resulting raw HEVC file with FFmpeg:
# the program writes its output to output.hevc
ffplay output.hevc
You should see a visualization of the depth data.
realsense-depth-to-vaapi-hevc10 and HVE are licensed under the Mozilla Public License, v. 2.0.
This is similar to the LGPL but more permissive:
Like the LGPL, if you modify the code you have to make your changes available. A GitHub fork with your changes satisfies this requirement.
Since you are linking to FFmpeg libraries, consider also avcodec and avutil licensing.
The next logical step is to add texture to the depth map.
RNHVE (realsense-network-hardware-video-encoder) already does that; see that project if you are interested.