Closed — billamiable closed this issue 6 years ago
It's a basic object reconstruction pipeline based on background subtraction, with the background subtraction tuned for the Kinect. Essentially, you run a world scene and an object scene:

1. First, reconstruct the world and stop fusion.
2. Put the object into the scene.
3. Diff the current depth against a synthetic depth rendering of the world.
4. Feed the diff to a second fusion pipeline that fuses into the object scene.

Tracking can be done either against the world (if the object is stationary), by mirroring the pose to the other pipeline, or against the object (i.e. tracking using the masked input images). In practice, tracking against the object doesn't work terribly well — more work would be needed to make it usable. Tracking against the world works OK, but it can be tricky to get round the back of the object to reconstruct a complete object. TL;DR: it would need more work to be genuinely interesting :)
There's also a simple bit of code to separate the user's hand from the object, if you're doing handheld object reconstruction - it's based on colour appearance, and is a bit hacky, but works every now and then...
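In the same hacky spirit, a colour-appearance hand filter could look like the sketch below. The specific rule and thresholds here are assumptions for illustration (a common cheap red-dominance heuristic for skin tones), not the project's actual model:

```python
import numpy as np

def hand_mask(rgb):
    """Rough skin detector: flag pixels whose red channel clearly
    dominates green and blue. rgb: uint8 array of shape (H, W, 3)."""
    r = rgb[..., 0].astype(np.int16)
    g = rgb[..., 1].astype(np.int16)
    b = rgb[..., 2].astype(np.int16)
    # Thresholds are illustrative; real skin models are far more robust.
    return (r > 95) & (r - g > 15) & (r > b) & (g > b)

def remove_hand(depth, rgb):
    """Zero out depth pixels classified as hand so they are not
    fused into the object model."""
    return np.where(hand_mask(rgb), 0.0, depth)
```

A fixed per-pixel colour rule like this is exactly why such a filter only "works every now and then": it breaks under lighting changes and for skin tones outside the thresholds.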
When looking at the code, I noticed there is an option `args.pipelineType == "objective"` — I wonder what it is used for? Thanks!