keijiro / Rsvfx

An example that shows how to connect RealSense depth camera to Unity VFX Graph

question and not an issue #30

Closed amir989 closed 5 years ago

amir989 commented 5 years ago

Great work! I'm working on something similar but with a slightly different approach. Two questions:

1. In the compute shader code, what value does the RemapBuffer hold? I couldn't understand the SetUnmanagedData() function. Could you explain it a bit?

2. I can't use VFX Graph in my project, so I have to do everything in a compute shader. My problem is doing the math for point-cloud positions while using a tracking camera: I need to read the depth (position) from the depth map, apply the tracking camera's pose to it, and then get the final data on the CPU. Combining the depth info with the tracking is where my problem is. Can you help me with that, or refer me to somewhere I can understand the math?
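For context, the math being asked about here is the standard pinhole-camera deprojection followed by a rigid-body pose transform. A minimal sketch (illustrative names and values, not this project's code):

```python
import numpy as np

def deproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole-model deprojection: pixel (u, v) plus depth -> camera-space point.

    fx/fy are focal lengths in pixels, (cx, cy) the principal point,
    as reported by the depth camera's intrinsics.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

def to_world(point_cam, rotation, translation):
    """Apply the tracking camera's pose (3x3 rotation matrix + translation)."""
    return rotation @ point_cam + translation

# The principal point deprojects straight onto the optical axis:
p = deproject(320, 240, 1.5, fx=600, fy=600, cx=320, cy=240)
# p == [0, 0, 1.5]

# With identity rotation and a 2 m translation along z:
world = to_world(p, np.eye(3), np.array([0.0, 0.0, 2.0]))
# world == [0, 0, 3.5]
```

The key point is the order of operations: deproject each depth pixel into a camera-space point first, then multiply by the tracking pose; simply adding the camera position to raw depth values would ignore the camera's rotation.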

Cheers

keijiro commented 5 years ago

In the compute shader code, what value does the RemapBuffer hold?

It has texture coordinates for each point in the point cloud.

https://intelrealsense.github.io/librealsense/doxygen/rs__frame_8h.html#a095b89895e9bd979bf77dd928ea707f2
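To illustrate the idea, here is a minimal Python sketch (not this project's code) of how per-point texture coordinates from such a remap buffer would be used to index into a color image:

```python
import numpy as np

# Tiny 2x2 stand-in for a color image; rows are y, columns are x,
# and the values are just color IDs for demonstration.
color = np.array([[10, 20],
                  [30, 40]])

def sample(color_img, uv):
    """Nearest-neighbor lookup of a normalized (u, v) coordinate."""
    h, w = color_img.shape
    px = min(int(uv[0] * w), w - 1)
    py = min(int(uv[1] * h), h - 1)
    return color_img[py, px]

# A remap buffer holds one UV pair per point in the cloud.
remap = [(0.0, 0.0), (0.9, 0.9)]
colors = [sample(color, uv) for uv in remap]
# colors == [10, 40]
```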

Can you help me with that, or refer me to somewhere I can understand the math?

Sorry, but please understand that this issue tracker is dedicated to solving problems with this specific project -- it's not a place for consulting on other projects.