Closed. bmegli closed this issue 4 years ago.
Problem analysis.
HVE already supports multiple concurrent hardware encoders.
Multiple application-level frames may be encoded in a single MLSP frame. It is always the responsibility of the application to interpret the data correctly.
Optimization is possible to avoid unnecessary copies of the data when preparing packets:
Interface change:
Subprogram that streams depth + ir (or, in general, anything + anything).
HVD already supports multiple concurrent hardware decoders.
Interface change:
Possibly a special-case implementation for:
Implementation-wise this is easy. Conceptually, it further erodes the generic character of NHVD (maintenance, reusability).
Possibly no change:
Before proceeding.
Here I mean, for example, sending a depth and an ir frame together:
A similar functionality is provided by containers (e.g. mkv, avi, mp4) with multiple streams:
It is possible to (for example):
However, with a lossy wireless medium:
From an engineering perspective:
Proof-of-concept finished across multi-frame branches in all repositories.
Subjective impressions: works much better, requires a lower bitrate, needs more GPU.
This needs some serious cleanup before merging.
Finished and merged into master.
Needs some documentation.
Documentation updated.
It would be nice to document the new pipeline with a video, but that is the distant future (if it happens at all).
First:
This is the minimum required for the next "release".
Refreshing: we are now past the "split NHVD into two libraries" step.
This is finished now.
There are:
Some loosely related improvements are ongoing.
Working textured depth streaming is already implemented (see #2, #4 and the video); however, the encoding is hacky and suboptimal.
Here the idea is to:
The possible gains are:
Judging from the depth encoding time benchmark, this will only add a few ms of additional latency.
From #4, some of the hardware encoding operations may be run concurrently, potentially even cutting those few ms of latency.