akashcastelino closed this issue 5 years ago
How does the 'Set Position from Map' node get position information from a render texture, and which render texture format must it be?
It simply interprets the R, G, B color component values as X, Y, Z position component values, so the render texture format should be RGBAHalf or RGBAFloat to support a wide range of real values.
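A minimal sketch of what that implies on the C# side, assuming a hypothetical `PositionMapBaker` component (the class and field names are illustrative, not from the repo): the map must be created with a floating-point format, since an 8-bit format would clamp position values to [0, 1].

```csharp
using UnityEngine;

// Illustrative only: allocates a render texture suitable for use as a
// position map with the VFX Graph "Set Position from Map" node.
public class PositionMapBaker : MonoBehaviour
{
    RenderTexture _positionMap;

    void Start()
    {
        // ARGBFloat (or ARGBHalf) stores full-range signed values;
        // default 8-bit formats would clamp X/Y/Z to [0, 1].
        _positionMap = new RenderTexture(
            512, 512, 0, RenderTextureFormat.ARGBFloat);
        _positionMap.enableRandomWrite = true; // allow compute-shader writes
        _positionMap.Create();
    }

    void OnDestroy()
    {
        if (_positionMap != null) _positionMap.Release();
    }
}
```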
I am not sure what is being done with the remapBuffer. Why is remapping required?
It's just used to transfer data acquired from RealSense (which is on the CPU memory) to the GPU memory.
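In other words, the remap step is just a CPU-to-GPU upload. A hedged sketch of that role, assuming point-count and buffer layouts chosen for illustration (not the repo's actual code):

```csharp
using UnityEngine;

// Illustrative only: uploads per-point data received on the CPU
// (e.g. from RealSense) into GPU-side compute buffers.
public class PointCloudUploader : MonoBehaviour
{
    const int PointCount = 640 * 480; // assumed depth-frame size

    ComputeBuffer _positionBuffer; // one float3 per point
    ComputeBuffer _colorBuffer;    // one packed RGB8 uint per point

    void Start()
    {
        _positionBuffer = new ComputeBuffer(PointCount, sizeof(float) * 3);
        _colorBuffer = new ComputeBuffer(PointCount, sizeof(uint));
    }

    public void Upload(Vector3[] positions, uint[] colors)
    {
        // SetData copies the managed arrays into GPU memory.
        _positionBuffer.SetData(positions);
        _colorBuffer.SetData(colors);
    }

    void OnDestroy()
    {
        _positionBuffer?.Release();
        _colorBuffer?.Release();
    }
}
```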
Is there a way to directly feed the received arrays (RGB8 color and Vector3 position) into the 'Set Position/Color' nodes of VFX Graph, or is the only way to bake this data into render textures that can be used with the 'Set Position/Color from Map' nodes?
You have to transfer the data to the GPU in some way to utilize the GPU. There might be other ways to do it (e.g. using compute buffers instead of render textures), but it's not possible to use those managed arrays without transferring them.
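Either way, a bake step is needed to get the data into the maps the graph samples. One common pattern is dispatching a compute shader that scatters the uploaded buffers into the two render textures; the sketch below shows only the C# dispatch side, and the kernel name "BakeMaps" and the shader property names are assumptions, not the repo's actual identifiers.

```csharp
using UnityEngine;

// Illustrative only: dispatches an assumed compute kernel that reads
// the uploaded point buffers and writes each point's position/color
// into the render textures sampled by the VFX Graph.
public class MapBaker : MonoBehaviour
{
    public ComputeShader bakeShader;  // assumed compute shader asset
    public RenderTexture positionMap; // ARGBFloat, random write enabled
    public RenderTexture colorMap;

    public void Bake(ComputeBuffer positions, ComputeBuffer colors)
    {
        int kernel = bakeShader.FindKernel("BakeMaps"); // assumed name
        bakeShader.SetBuffer(kernel, "_Positions", positions);
        bakeShader.SetBuffer(kernel, "_Colors", colors);
        bakeShader.SetTexture(kernel, "_PositionMap", positionMap);
        bakeShader.SetTexture(kernel, "_ColorMap", colorMap);
        // Assumes the kernel uses 8x8 thread groups.
        bakeShader.Dispatch(kernel,
            positionMap.width / 8, positionMap.height / 8, 1);
    }
}
```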
I'm closing the issue now. Please feel free to reopen for further questions.
Thanks @keijiro for your answers. They were very useful in baking pos/color to the render textures and visualising with VFX graph.
However, at the moment I am experiencing a huge performance drop. I am passing an array with the per-pixel position/color data to Unity's ComputeBuffer.SetData function.
In your implementation you have used an UnsafeUtility.SetUnmanagedData function, and I think this is what makes the difference in performance. But upon looking into the Utility.cs class I see a disclaimer saying "Do not try this at home" :). If this improves performance over the SetData method, are there any other cons of using it apart from it not being future-proof?
There is no known issue with it, though its use is strongly discouraged.
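For anyone hitting the same performance question: a safer alternative worth trying, assuming a recent Unity version, is the `ComputeBuffer.SetData` overload that accepts a `NativeArray<T>`, which avoids the managed-array copy without touching `UnsafeUtility` directly. This is a sketch under that assumption, with illustrative sizes and names:

```csharp
using Unity.Collections;
using UnityEngine;

// Illustrative only: stages incoming points in a NativeArray and
// uploads it with the NativeArray overload of SetData, avoiding an
// intermediate managed-array copy.
public class NativeArrayUpload : MonoBehaviour
{
    const int PointCount = 640 * 480; // assumed frame size

    ComputeBuffer _buffer;
    NativeArray<Vector3> _staging;

    void Start()
    {
        _buffer = new ComputeBuffer(PointCount, sizeof(float) * 3);
        _staging = new NativeArray<Vector3>(PointCount, Allocator.Persistent);
    }

    public void Upload()
    {
        // Fill _staging from the receive path (network, sensor), then:
        _buffer.SetData(_staging); // no managed-array copy
    }

    void OnDestroy()
    {
        _buffer?.Release();
        if (_staging.IsCreated) _staging.Dispose();
    }
}
```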
@akashcastelino Did you ever figure this out? I'm interested in doing something similar. If I have a color image and an array of the depths for those pixels, do I really need a remapBuffer or can I replace it with something else?
I am using the RealSense D435, but instead of plugging it in and using the Unity package, I am streaming the data from a RealSense connected to another computer to the Unity machine. The data I am receiving is the color (RGB8) and position (Vector3) data for each particle, in two separate arrays.
I am using the implementation in this repo as a basis for visualising wireless RealSense data in VFX Graph, but I have a few questions.
Any help would be really appreciated! Also, if someone who understands the code could add comments, that would be very helpful for reusing and modifying it.