Closed — Vamoss closed this issue 5 years ago.
Good point, and unfortunately, it's quite difficult to use motion because point clouds captured from RealSense/Kinect don't have any motion information. They're just a sequence of static point clouds.
It's probably possible to use a body tracking library or an optical flow estimator to generate motion vectors for each point, though that could get quite complicated. I think I should invest time in more basic usage for now.
I did it with OpenCV (frame differencing), but requesting the RealSense image back from the GPU to process it on the CPU is eating a lot of FPS.
Using skeleton tracking is not a good choice since I intend to work with a crowded audience, and it would cause a lot of inconsistency.
I will probably use this opportunity to learn compute shaders and try to do the FrameDiff on the GPU.
Okay. It seems you've done much more than me on this topic, so I'm afraid I have no advice to offer. Sorry for not being more helpful, and I hope the project goes well for you.
@Vamoss let me know if you find anything. I'm experimenting with generating a velocity buffer from the depth map as well. It would be awesome to trigger emission blooming with that.
Thank you again @keijiro!
@Podden, I don't know how much experience you have with OpenCV; we used this package: https://assetstore.unity.com/packages/tools/integration/opencv-for-unity-21088
Our approach was to use the centroids from blob detection to manipulate a force vector field, then use those forces to drive the particle velocities. Unfortunately the code is under an NDA and I can't share it here. Let me know if I can help on a specific topic.
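Since the actual code can't be shared, here is a hypothetical sketch of the described pipeline: blob centroids deposit forces into a coarse vector field based on their displacement, and particles sample that field to accumulate velocity. All names, grid sizes, and constants here are invented for illustration.

```python
import numpy as np

GRID = 8     # coarse force-field resolution (assumption)
FORCE = 0.5  # force scale per blob (assumption)
field = np.zeros((GRID, GRID, 2), dtype=np.float32)

# Pretend blob detection returned these centroids in normalized [0, 1) coords,
# for the current and previous frame.
centroids = [(0.25, 0.5), (0.75, 0.25)]
prev_centroids = [(0.20, 0.5), (0.75, 0.30)]

# Deposit each centroid's displacement as a force into its grid cell.
for (x, y), (px, py) in zip(centroids, prev_centroids):
    gx, gy = int(x * GRID), int(y * GRID)
    field[gy, gx] += FORCE * np.array([x - px, y - py], dtype=np.float32)

# Particles sample the field at their position and accumulate velocity;
# only the first particle sits in a cell touched by a moving blob.
particles_pos = np.array([[0.26, 0.51], [0.9, 0.9]], dtype=np.float32)
particles_vel = np.zeros_like(particles_pos)
cells = (particles_pos * GRID).astype(int)
particles_vel += field[cells[:, 1], cells[:, 0]]

print(particles_vel)
```

A real implementation would also decay and diffuse the field each frame so forces fade smoothly, but the deposit-then-sample structure is the core of the approach described above.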
I'm closing this issue now. Please feel free to reopen for further questions.
Hi Keijiro, thank you for this amazing project!
If it is not asking too much, could you develop an example where the particles are motion responsive?
Cheers!