bmegli / realsense-network-hardware-video-encoder

Realsense hardware encoded color/ir H.264 and color/ir/depth HEVC streaming
Mozilla Public License 2.0
23 stars · 3 forks

Multiple RealSense and Point Cloud Color #13

Closed · fajarnugroho93 closed this issue 4 years ago

fajarnugroho93 commented 4 years ago

Hi,

I am able to stream point cloud data to a Unity receiver from your other repository.

I have some questions:

  1. Is the current implementation able to stream point cloud data from multiple RealSense cameras?
  2. Is the point cloud streaming unable to stream color for now? I am able to stream a textured point cloud with the D435, but the color is greyscale.

That is all. Thank you for your remarkable work.

bmegli commented 4 years ago

Hi @fajarnugroho93,

  1. Is the current implementation able to stream point cloud data from multiple RealSense cameras?

The only thing holding it back is that RNHVE is not able to distinguish between RealSense cameras. Once you add such a mechanism (by serial number, by physical port), you may use multiple instances and stream to different ports.
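Such a device-selection mechanism could be sketched with librealsense (a sketch only, not existing RNHVE code; the serial number below is a placeholder):

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>

int main()
{
    rs2::context ctx;

    // print serial numbers of all connected RealSense devices
    for (rs2::device dev : ctx.query_devices())
        std::cout << dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER) << std::endl;

    // bind this pipeline to a single camera by serial number;
    // "831612070000" is a placeholder, use a serial printed above
    rs2::config cfg;
    cfg.enable_device("831612070000");

    rs2::pipeline pipe;
    pipe.start(cfg); // streams only from the selected device
    return 0;
}
```

Each instance of the streaming program could then be bound to a different camera and send to a different port.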

On the receiver side it should be a matter of duplicating the PointCloud game object and configuring a different port and location.

  2. Is the point cloud streaming unable to stream color for now? I am able to stream a textured point cloud with the D435, but the color is greyscale.

The short answer is it can't yet.

So there are two things to make it work:

  • stream depth + color (simple)
  • align color to depth (moderately complex, needs some computation)

Kind regards

bmegli commented 4 years ago
  • align color to depth (moderately complex, needs some computation)

Ok, this can actually be made in two ways:

  • sender side (through librealsense)
  • receiver side (own code)

I thought about receiver side, my computational resources are limited on the sender side.

It is a lot easier to do it on the sender side with librealsense:

  • one should be able to use librealsense
  • to align color data to depth data
  • and encode already aligned data
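The sender-side path can be sketched with librealsense's rs2::align processing block (a sketch, not RNHVE code; resolutions and pixel formats are illustrative):

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_DEPTH, 848, 480, RS2_FORMAT_Z16, 30);
    cfg.enable_stream(RS2_STREAM_COLOR, 848, 480, RS2_FORMAT_YUYV, 30);
    pipe.start(cfg);

    // processing block that maps color pixels into the depth frame
    rs2::align align_to_depth(RS2_STREAM_DEPTH);

    while (true)
    {
        rs2::frameset frames = pipe.wait_for_frames();
        frames = align_to_depth.process(frames);

        rs2::depth_frame depth = frames.get_depth_frame();
        rs2::video_frame color = frames.get_color_frame();

        // depth.get_data() and color.get_data() are now aligned;
        // this is where they would be passed to the hardware encoder
    }
    return 0;
}
```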

One thing to watch here is the time budget (e.g. at 30 fps you have around 33 ms to process, encode and send each frame to keep up with the framerate). Not forcing the highest quality encoding for depth would help (it is currently hardcoded as a constant), especially if you use a low-power SBC on the sending side.

fajarnugroho93 commented 4 years ago

The only thing holding it back is that RNHVE is not able to distinguish between RealSense cameras. Once you add such a mechanism (by serial number, by physical port), you may use multiple instances and stream to different ports.

On the receiver side it should be a matter of duplicating the PointCloud game object and configuring a different port and location.

Ah, I see.

Ok, this can actually be made in two ways

  • sender side (through librealsense)
  • receiver side (own code)

I thought about receiver side, my computational resources are limited on the sender side.

It is a lot easier to do it on the sender side with librealsense:

  • one should be able to use librealsense
  • to align color data to depth data
  • and encode already aligned data

Hmm, I do not really understand this now, but I will look into it.

Thank you.

fajarnugroho93 commented 4 years ago

So there are two things to make it work:

* stream depth + color (simple)

I am currently trying this.

* the framework may already encode and transport depth + color

  * there is no such example now

    * it is a matter of moving some code
    * from other example
    * to depth_ir example (color instead of ir)

Please correct me if I am wrong: I am creating a new class based on rnhve_depth_ir.cpp and changing the ir frame to color. Then I need to fix the unprojection and alignment on the UNHVD side, right?

bmegli commented 4 years ago

Working example

Please correct me if I am wrong: I am creating a new class based on rnhve_depth_ir.cpp and changing the ir frame to color. Then I need to fix the unprojection and alignment on the UNHVD side, right?

You could do it this way, but this is the hard way (you would have to implement the alignment yourself).

See the working proof of concept in the depth-color branch.

e.g. git pull followed by git checkout depth-color, then build as usual

You will also have to update UNHVD

e.g. git pull, then also uncomment this depth_config instead of the previous one

Notes

Alignment direction

The decision whether to align depth to color or color to depth has important consequences.

For the D435, infrared (and depth) has a wider FOV than color (the color data doesn't cover all of the ir/depth data).

In the example I aligned depth to color, which means the output follows the color stream: depth data outside the (narrower) color FOV is discarded.
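With librealsense, the direction is just the stream target passed to rs2::align; which target you pick determines whose FOV the output inherits (a sketch of the two options):

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    // align depth to color: output follows the narrower color FOV,
    // depth samples outside the color image are dropped
    rs2::align align_to_color(RS2_STREAM_COLOR);

    // align color to depth: keeps the wider ir/depth FOV,
    // but the edges of the depth image get no color data
    rs2::align align_to_depth(RS2_STREAM_DEPTH);

    // either block would then be applied to each frameset
    // via its process() method before encoding
    (void)align_to_color;
    (void)align_to_depth;
    return 0;
}
```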

Intrinsics

Note that the example outputs intrinsics (camera model) for both depth and color.

Ideally you should replace this depth config with the data output by your camera (I have no idea whether yours differs or not).

Note that intrinsics change with resolution.

Resolution

The resolution you use for depth and color has an impact on the result.

For simplicity, in the example I kept the same resolution for both.

As far as I remember, for RealSense the FOV may change with resolution. This means that at 848x480 the color sensor's FOV may be even narrower.

Resource use

Alignment will probably use some of your CPU power. All other examples in this repository don't (there is no data processing, apart from passing data to the hardware encoder and sending it over UDP).

Bitrate

This will probably need more bandwidth than depth + ir.

Summary

This is a working proof of concept, not the whole solution; there are still unanswered questions like the alignment direction, which resolution to use, and how it affects the FOV.

fajarnugroho93 commented 4 years ago

I see, I will try it.

Thank you very much for your thorough explanation and notes. I really appreciate it.

fajarnugroho93 commented 4 years ago

It is working!

I cannot comment yet regarding the notes because currently it is good enough for my scene, but I will report back in case I find something interesting.

Thank you very much! Good luck with your future endeavors.

bmegli commented 4 years ago

Ok @fajarnugroho93!

I am just saying that it may be possible to squeeze more from the hardware.

See HVS#8 if you are interested (this is where the long-term effort will be coordinated).