FlantasticDan / mocapMath

Motion capture is expensive and complex, but what if it wasn't?
GNU General Public License v3.0

Video examples #9

Dene33 opened this issue 5 years ago

Dene33 commented 5 years ago

Hi, your project seems very interesting. Is there a video showing the whole process? Thanks.

FlantasticDan commented 5 years ago

Hello!

It's awesome to see some interest in my work so early on. I have a data set I've been using as a proof of concept for the geometric equations. I'll throw together a demo when I have some time this weekend and update this issue.

Cheers, Daniel

Dene33 commented 5 years ago

I think the following link can be interesting for you: https://github.com/nerk987/triangulate

FlantasticDan commented 5 years ago

Yeah! I saw that you had forked that last night. It seems nerk987 and I took the same data inputs and approached the math differently, only to arrive at similar outputs. If I had found this a week ago, I would probably have saved myself a ton of time.

Where we differ is the Blender integration: ultimately I want to build a toolkit that is application-agnostic so it can be implemented in a variety of VFX pipelines. For the time being that looks like a set of application-specific import/export scripts and a more centralized "solver."

I've started with Blender because it has a built-in tracker, solver, and 3D environment with a relatively convenient Python interface, but ultimately I'd like to be able to use something like Mocha or After Effects as a tracker, Cinema 4D or NukeX as a solver, and Maya or MotionBuilder as an ingester.

The work being done with OpenCV has me thinking that a lot of the more manual tasks could be automated toward a one-click solution. Who knows, though; this is a side hobby project for me right now.
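The repo's own geometric equations aren't shown in this thread, but the classic two-view solve that both projects are variations on can be sketched as a linear (DLT) triangulation. This is a minimal illustration, not necessarily mocapMath's method; the projection matrices and point values below are toy assumptions.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices (one per camera angle).
    x1, x2: (u, v) screen-space positions of the same tracked point.
    Returns the estimated 3D point in world space.
    """
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: one at the origin, one translated 1 unit along X.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known 3D point into both views, then recover it.
X_true = np.array([0.5, 0.25, 4.0])
x1 = P1 @ np.append(X_true, 1.0)
x2 = P2 @ np.append(X_true, 1.0)
x1, x2 = x1[:2] / x1[2], x2[:2] / x2[2]

print(triangulate_point(P1, P2, x1, x2))  # ≈ [0.5, 0.25, 4.0]
```

With noisy real-world tracks the SVD gives a least-squares answer rather than an exact one, which is why tracker accuracy matters downstream.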

FlantasticDan commented 5 years ago

Here is the demo I was talking about; it was made using the latest commit to master. For time's sake I only tracked the joints on the actor's left side (screen right). The top is a screen capture of the trackers on the footage, and the bottom is the rendered cubes tracked to the calculated points in 3D space, viewed through the solved camera.

mocapMath Demo

Dene33 commented 5 years ago

Were the trackers placed manually?

FlantasticDan commented 5 years ago

Sorta. The tracks were placed and labeled manually, but Blender was able to track most of the clip automatically without much correction. I've been unable to conceptualize a method that will fully automatically track and identify which points go together across the two camera angles. Even so, as I see it, manually tracking each joint from each camera angle is still less time-consuming than the alternative, which would be animating by hand to reference footage.

My best idea at the moment is to use unique tracking markers for each joint and somehow implement computer vision (likely through OpenCV) to identify and track them.
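Whatever detector ends up identifying the markers, the bookkeeping that idea implies is straightforward: once each camera's detections carry a marker ID, the cross-angle correspondence problem reduces to a join on shared IDs. A hypothetical sketch (the function name, joint labels, and coordinates are all illustrative, not part of mocapMath):

```python
def pair_tracks(cam_a, cam_b):
    """Pair per-marker 2D tracks from two cameras by shared marker ID.

    cam_a, cam_b: dicts mapping marker ID -> (u, v) screen position
    for a single frame. Returns {marker_id: ((u_a, v_a), (u_b, v_b))}
    for IDs seen by both cameras, which is the per-frame input a
    two-view solver needs.
    """
    return {mid: (cam_a[mid], cam_b[mid]) for mid in cam_a.keys() & cam_b.keys()}

frame_a = {"l_wrist": (412, 280), "l_elbow": (398, 331), "l_knee": (405, 540)}
frame_b = {"l_wrist": (118, 275), "l_elbow": (131, 329)}

pairs = pair_tracks(frame_a, frame_b)
print(sorted(pairs))  # l_knee is dropped: only camera A saw it this frame.
```

The hard part remains assigning the IDs automatically; this only shows that manual cross-angle matching disappears once that is solved.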

Dene33 commented 5 years ago

You can track joints with OpenPose or other 2D pose estimation implementations.

FlantasticDan commented 5 years ago

My concern there is that with pose estimation the position is an approximation, whereas a tracker gives an exact screen-space position.