strangerattractor / Soundvision_PUBLIC

Max's long long journey into the unity world.
12 stars 1 fork

Deploy tracked skeleton info #98

Closed chikashimiyama closed 4 years ago

strangerattractor commented 4 years ago

References: https://github.com/keijiro/Smrvfx https://github.com/keijiro/VfxGraphTestbed

Basically, what I imagine to be useful is to have a rigged, skinned mesh available in VFX Graph to work with, since I imagine it would be cleaner in live situations to take the tracking data from the Kinect rather than processing the raw depth picture.

This whole idea of getting rigged meshes and meshes in general into VFX Graph is also completely new to me, so it would really help if we could take one of Keijiro's examples and make it work with our setup. For example https://github.com/keijiro/Smrvfx

From my basic understanding, he uses motion vectors from the rigged body as triggers/spawners for particles.

Also, if we have this rigged mesh in a scene and available in VFX Graph, I hope I could work with some examples and try to get to the "bodies attracting objects" idea myself.

So the basic ACs for this are:

chikashimiyama commented 4 years ago

OK, then this is going to be a big task.

chikashimiyama commented 4 years ago

https://docs.microsoft.com/en-us/previous-versions/windows/kinect/dn799273%28v%3dieb.10%29

This is the actual API, i.e. what you get about the body from the Kinect sensor. It's a lot.

Do you want to support multiple bodies in the scene?
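For reference, a minimal sketch of how multiple tracked bodies are typically read with the Kinect v2 Unity plugin (the class and field names follow the standard Windows.Kinect wrapper; this is an assumption, not our existing backend code):

```csharp
using System.Collections.Generic;
using UnityEngine;
using Windows.Kinect;

public class MultiBodyReader : MonoBehaviour
{
    private KinectSensor sensor;
    private BodyFrameReader reader;
    private Body[] bodies;

    void Start()
    {
        sensor = KinectSensor.GetDefault();
        reader = sensor.BodyFrameSource.OpenReader();
        bodies = new Body[sensor.BodyFrameSource.BodyCount]; // Kinect v2 tracks up to 6 bodies
        if (!sensor.IsOpen) sensor.Open();
    }

    void Update()
    {
        using (var frame = reader?.AcquireLatestFrame())
        {
            if (frame == null) return;
            frame.GetAndRefreshBodyData(bodies);
        }

        // Keep only the bodies that are actually tracked (we would cap this at 2 for CYLVESTER).
        var tracked = new List<Body>();
        foreach (var body in bodies)
            if (body != null && body.IsTracked)
                tracked.Add(body);

        Debug.Log($"Tracked bodies: {tracked.Count}");
    }
}
```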

strangerattractor commented 4 years ago

Two bodies would be great, so we can seamlessly use it for CYLVESTER too. How much bigger will the task get because of this?

chikashimiyama commented 4 years ago

@strangerattractor

Analysis of Keijiro's example

Mesh: He is using a body mesh that consists of ca. 46,000 vertices, with 33 bone joints inside.

Animation: The animation is motion-captured, probably with 33 sensors.

Bone to mesh: The animation controls the bones, and the mesh is moved according to the bones using a Skinned Mesh Renderer.

Basic data for the physical simulation in a compute shader: He uses a custom compute shader (a trick to let the GPU execute mathematical calculations instead of the CPU) to fill three buffers, namely PositionMap, VelocityMap, and NormalMap, from the mesh deformed by the Skinned Mesh Renderer.

VFX: He uses the three buffers computed in the compute shader in VFX Graph.

This is a combination of very high-end techniques.
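To make the first step concrete: a rough CPU-side sketch of what the PositionMap/VelocityMap contain (per-vertex positions and velocities of the deformed mesh). Keijiro computes this on the GPU in a compute shader and writes the results into textures; this is just the same math in plain C# for illustration.

```csharp
using UnityEngine;

public class SkinnedMeshSampler : MonoBehaviour
{
    public SkinnedMeshRenderer skin;

    private Mesh baked;
    private Vector3[] previous;
    private Vector3[] velocities;

    void Start()
    {
        baked = new Mesh();
    }

    void LateUpdate()
    {
        // Snapshot the deformed (animated) mesh for this frame.
        skin.BakeMesh(baked);
        Vector3[] current = baked.vertices;

        if (previous != null && previous.Length == current.Length)
        {
            if (velocities == null || velocities.Length != current.Length)
                velocities = new Vector3[current.Length];

            for (int i = 0; i < current.Length; i++)
            {
                // current[i] corresponds to a PositionMap entry,
                // velocities[i] to the matching VelocityMap entry.
                velocities[i] = (current[i] - previous[i]) / Time.deltaTime;
            }
        }
        previous = current;
    }
}
```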

chikashimiyama commented 4 years ago

The hardest (nearly impossible) part is controlling the bones/joints using the Kinect.

https://youtu.be/h93HDV4NjO8

It's impossible to achieve this level of smoothness using an optical sensor; only mocap can do that. Some joints are missed if they are hidden by other parts of the body.

Another problem is that I have no idea how to map the Kinect joints (25) to the bones of this avatar (33). It's going to be a lot of work, but I don't expect a fascinating result.

chikashimiyama commented 4 years ago

https://answers.unity.com/questions/1453641/how-to-control-a-rigged-body-with-kinect.html

strangerattractor commented 4 years ago

My idea was that if you use a VFX triggered by the movement of the bones, you could get away with "glitchy" movements of miscalculated body parts. Basically, we would use the skin of the model to emit particles but would hide the model itself. But I understand what you mean... I also checked some of his demos, and most of the time he uses baked data from a model that he imports from Houdini, for example.

But still, he has demos where he uses real-time data from a Kinect 2 or Azure Kinect in VFX Graph...

But when I look at it now, it's mostly depth data...

strangerattractor commented 4 years ago

But probably my idea was very naive. Nevertheless, I'll be able to use the bone positions for VFX Graph. There are good YouTube tutorials on it. I'll check it ASAP.

strangerattractor commented 4 years ago

https://www.youtube.com/watch?v=3EsITXwhlF4

Attracting towards one Vector3 is easy, but I need a math guy to tell me how to attract towards many points, or the closest points...

chikashimiyama commented 4 years ago

> Attracting towards one Vector3 is easy, but I need a math guy to tell me how to attract towards many points, or the closest points...

It's actually not so difficult, but I wouldn't say it's very easy either. Keijiro used a compute shader to prepare the data for the VFX.

https://docs.unity3d.com/Manual/class-ComputeShader.html
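The math for "attract towards the closest of many points" is just a nearest-neighbour search plus a simple attraction force. A minimal C#-side sketch (in the real setup this loop would live in a compute shader or inside VFX Graph, and `strength` is a made-up parameter):

```csharp
using UnityEngine;

public static class Attraction
{
    // Returns the force pulling a particle towards the nearest of several attractor points.
    public static Vector3 TowardsClosest(Vector3 particle, Vector3[] attractors, float strength)
    {
        Vector3 closest = attractors[0];
        float bestSqr = (closest - particle).sqrMagnitude;

        for (int i = 1; i < attractors.Length; i++)
        {
            float sqr = (attractors[i] - particle).sqrMagnitude;
            if (sqr < bestSqr)
            {
                bestSqr = sqr;
                closest = attractors[i];
            }
        }

        // Simple linear attraction; could be scaled by 1/distance^2 for a gravity-like feel.
        return (closest - particle).normalized * strength;
    }
}
```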

chikashimiyama commented 4 years ago

> My idea was that if you use a VFX triggered by the movement of the bones, you could get away with "glitchy" movements of miscalculated body parts. Basically, we would use the skin of the model to emit particles but would hide the model itself. But I understand what you mean... I also checked some of his demos, and most of the time he uses baked data from a model that he imports from Houdini, for example.

I don't think you can hide all the glitches just by hiding the model.

chikashimiyama commented 4 years ago

What we are doing is a little bit pointless, because using a Skinned Mesh Renderer on the bones means making a point cloud from the bones. So the chain is: the Kinect detects a human figure in the captured depth image -> we extract bones -> we put flesh on the bones (Skinned Mesh Renderer) -> we make a puppet that imitates our movement -> but we don't show it because it is glitchy -> we use it as a point cloud, even though the Kinect already gives us a point cloud -> we just use the movement of each point and generate particles...

I guess something is wrong here. It's a convoluted process. The only advantage of using a Skinned Mesh Renderer instead of the point cloud is that you can trace the movement of each point, so you can make something like Keijiro's demo. I cannot see any advantage other than that.

chikashimiyama commented 4 years ago

@strangerattractor

I think I have warned you enough. If you want to continue, I'm fine with that, but I need a human body mesh with a rig. I'm currently using this model https://assetstore.unity.com/packages/3d/characters/renderpeople-free-rigged-models-95860 but if you want to use another model, please let me know.

The number of joints that the 3D mesh provides and the number of Kinect-detected bones are sometimes different, so we cannot replace the mesh easily (I have to connect the Kinect-detected bones and the joints of the 3D model "manually" in the script).
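For illustration, a partial sketch of what that "manual" connection could look like, mapping Kinect joint types onto Unity's Mecanim humanoid bones. The mapping entries are assumptions; the actual correspondence depends on the avatar's rig.

```csharp
using System.Collections.Generic;
using UnityEngine;
using Windows.Kinect;

public class KinectToAvatarMap : MonoBehaviour
{
    public Animator avatar; // humanoid rig of the purchased model

    // Partial example mapping; a full table needs all 25 Kinect joints,
    // and some avatar bones will have no Kinect counterpart at all.
    private static readonly Dictionary<JointType, HumanBodyBones> map =
        new Dictionary<JointType, HumanBodyBones>
        {
            { JointType.Head,         HumanBodyBones.Head },
            { JointType.Neck,         HumanBodyBones.Neck },
            { JointType.SpineMid,     HumanBodyBones.Spine },
            { JointType.SpineBase,    HumanBodyBones.Hips },
            { JointType.ShoulderLeft, HumanBodyBones.LeftUpperArm },
            { JointType.ElbowLeft,    HumanBodyBones.LeftLowerArm },
            { JointType.HandLeft,     HumanBodyBones.LeftHand },
            { JointType.KneeLeft,     HumanBodyBones.LeftLowerLeg },
            { JointType.FootLeft,     HumanBodyBones.LeftFoot },
        };

    // Returns the avatar transform that corresponds to a Kinect joint, if any.
    public Transform BoneFor(JointType joint)
    {
        return map.TryGetValue(joint, out var bone) ? avatar.GetBoneTransform(bone) : null;
    }
}
```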

chikashimiyama commented 4 years ago

Kinect doesn't provide bone positions in the usual 3D scene coordinates; we need to convert them somehow.

https://medium.com/@lisajamhoury/understanding-kinect-v2-joints-and-coordinate-system-4f4b90b9df16
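A minimal sketch of that conversion, following the article above: Kinect camera space is metric and right-handed with the origin at the sensor, while Unity is left-handed, so the X axis is usually mirrored. The `origin` and `mirror` parameters here are hypothetical tuning knobs for the installation, not anything in our codebase yet.

```csharp
using UnityEngine;
using Windows.Kinect;

public static class KinectSpace
{
    // Converts a Kinect camera-space joint position (metres, origin at the sensor)
    // into a Unity world position.
    public static Vector3 ToUnity(CameraSpacePoint p, Vector3 origin, bool mirror = true)
    {
        float x = mirror ? -p.X : p.X; // flip X to account for the handedness difference
        return origin + new Vector3(x, p.Y, p.Z);
    }
}

// Usage: Vector3 head = KinectSpace.ToUnity(body.Joints[JointType.Head].Position, sensorOrigin);
```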

strangerattractor commented 4 years ago

Ok, let me think for a moment. You are correct, of course.


chikashimiyama commented 4 years ago

The advantage of Kinect skeleton detection is that you can track the positions of body parts like the head, hands, and feet, but using that as the skeleton of an avatar is very challenging, because the avatars sold on the market are basically not designed to be controlled by a Kinect (if I move the joints using Kinect data, it looks clumsy).

The approach this guy shows is kind of a hack: https://answers.unity.com/questions/1453641/how-to-control-a-rigged-body-with-kinect.html

But Avatar IK only lets me control the two hands and two feet; the rest is "inferred". I think we should not implement Kinect -> Avatar ourselves. If there is an asset, just grab it. Otherwise, it takes too long.
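For reference, the Avatar IK route from that answer boils down to something like the sketch below: only the four end effectors can be driven directly. The joint positions would come from the Kinect mapping above; `GetKinectLeftHandPosition` is a placeholder.

```csharp
using UnityEngine;

[RequireComponent(typeof(Animator))]
public class KinectIKDriver : MonoBehaviour
{
    private Animator animator;

    void Awake() => animator = GetComponent<Animator>();

    // Called by Unity during the Animator's IK pass ("IK Pass" must be enabled on the layer).
    void OnAnimatorIK(int layerIndex)
    {
        Vector3 leftHand = GetKinectLeftHandPosition(); // placeholder: world position from the Kinect

        animator.SetIKPositionWeight(AvatarIKGoal.LeftHand, 1f);
        animator.SetIKPosition(AvatarIKGoal.LeftHand, leftHand);
        // ... same for RightHand, LeftFoot, RightFoot; everything else stays "inferred" by the rig.
    }

    private Vector3 GetKinectLeftHandPosition()
    {
        return Vector3.zero; // stub; would read the mapped Kinect joint here
    }
}
```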

chikashimiyama commented 4 years ago

The asset you mentioned before might be helpful to connect Kinect -> Avatar:

https://rfilkov.com/2014/08/01/kinect-v2-with-ms-sdk/

Obviously, we need to integrate this carefully into our Kinect system.

chikashimiyama commented 4 years ago

Another idea is to forget about the skins. We can trace the movements of the 25 human joints using the Kinect v2, if conditions are perfect, and track the points over time. We can use those directly as particle emitters, for example.

Or we can connect the joints with lines (basically these are bones) and use these bones as particle emitters.

If we don't show the mesh, using skins is not strictly necessary, though it is what makes Keijiro's demo so attractive.

Using joints as attractors is easy (a 1-day task); bones are also easy (a 2-day task), I would say.
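A rough sketch of the bone variant: sample points along the line between two joints and hand them to VFX Graph, e.g. baked into a small position texture. The exposed property name "PositionMap" and the joint source are assumptions, not an existing part of our graph.

```csharp
using UnityEngine;
using UnityEngine.VFX;

public class BoneEmitter : MonoBehaviour
{
    public VisualEffect vfx;
    public int samplesPerBone = 16;

    private Texture2D positionMap;

    void Start()
    {
        positionMap = new Texture2D(samplesPerBone, 1, TextureFormat.RGBAFloat, false);
        vfx.SetTexture("PositionMap", positionMap); // hypothetical exposed texture property
    }

    // Writes evenly spaced points along one bone (jointA -> jointB) into the texture.
    public void UpdateBone(Vector3 jointA, Vector3 jointB)
    {
        for (int i = 0; i < samplesPerBone; i++)
        {
            float t = i / (float)(samplesPerBone - 1);
            Vector3 p = Vector3.Lerp(jointA, jointB, t);
            positionMap.SetPixel(i, 0, new Color(p.x, p.y, p.z, 1f));
        }
        positionMap.Apply();
    }
}
```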

strangerattractor commented 4 years ago

Ok, let's drop the mesh-on-bones topic.

Then let's use bones and joints as attractors, and VFX Graph as the emitter. Let's expose colors, forces, and particle emission counts at least. Let's make the motion of bones or joints able to trigger particle emission as well as sound.

Could you write this up as tasks, please? I have to give the workshop and only have a 5-minute break. 😬

How much time do you think you need to make the depth texture from the Kinect usable in VFX Graph, like in the demos?
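A minimal sketch of the "motion triggers particles" part, assuming a VFX Graph with an exposed event and spawn-count property (the names "OnBurst" and "SpawnCount" are made up; the same speed value could also be forwarded to the sound engine):

```csharp
using UnityEngine;
using UnityEngine.VFX;

public class JointMotionTrigger : MonoBehaviour
{
    public VisualEffect vfx;
    public float speedThreshold = 1.0f; // metres per second

    private Vector3 previous;
    private bool hasPrevious;

    // Call once per frame with the current world position of a tracked joint.
    public void Feed(Vector3 jointPosition)
    {
        if (hasPrevious)
        {
            float speed = (jointPosition - previous).magnitude / Time.deltaTime;
            if (speed > speedThreshold)
            {
                vfx.SetInt("SpawnCount", Mathf.RoundToInt(speed * 10f)); // hypothetical exposed property
                vfx.SendEvent("OnBurst");                                // hypothetical exposed event
            }
        }
        previous = jointPosition;
        hasPrevious = true;
    }
}
```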


chikashimiyama commented 4 years ago

> How much time do you think you need to make the depth texture from the Kinect usable in VFX Graph, like in the demos?

It's not so easy to estimate, but probably 5-6 hours. But I have too many things to clean up in the backend.

chikashimiyama commented 4 years ago

Made four different subtasks.