Open · natowi opened this issue 5 years ago
I am already working on a masking system based on GrabCut.
Why use features from the reconstruction? This makes no sense because it does not distinguish between background and foreground features.
I do not want to use the features directly for segmentation. I want to (manually) select detected features in one image and transfer that selection to the other images, since ImageMatching already tells us which images share corresponding features. That way we do not need to manually add control points (fg/bg) for every image in the dataset.
Features selected by lasso in image 1 are then also selected in images 2 and 3 (through their corresponding features); to refine the selection, we select a few missing features in image 3. This way we do not need to select the same areas in multiple images and only need to refine the selection.
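Propagating such a selection is cheap once the matches are known. A minimal sketch in Python, assuming a hypothetical `matches` structure keyed by image pairs (the real data would come from the ImageMatching/FeatureMatching outputs):

```python
# matches[(img_a, img_b)] = list of (feature_idx_in_a, feature_idx_in_b);
# hypothetical layout, stands in for the pipeline's matching output.
matches = {
    ("img1", "img2"): [(0, 5), (1, 7), (2, 9)],
    ("img1", "img3"): [(0, 3), (2, 4)],
}

def propagate_selection(selected, src_img, matches):
    """Transfer a set of selected feature indices from src_img to every
    image it shares matches with."""
    selection = {src_img: set(selected)}
    for (a, b), pairs in matches.items():
        if a == src_img:
            selection.setdefault(b, set()).update(
                j for i, j in pairs if i in selection[src_img])
        elif b == src_img:
            selection.setdefault(a, set()).update(
                i for i, j in pairs if j in selection[src_img])
    return selection

# Features 0 and 2 were lasso-selected in img1; the selection carries over.
print(propagate_selection({0, 2}, "img1", matches))
# {'img1': {0, 2}, 'img2': {5, 9}, 'img3': {3, 4}}
```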
I know there are good foreground extraction algorithms like GrabCut, but from my understanding they do not support bulk extraction across a whole image set. Correct me if I am wrong :)
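For what it's worth, OpenCV's GrabCut has no built-in batch mode, but nothing prevents looping it over a dataset once each image has seed points, e.g. the propagated feature positions. A sketch, assuming a hypothetical `seeds_per_image` mapping and an illustrative seed radius:

```python
import cv2
import numpy as np

def grabcut_with_seeds(image, fg_points, iterations=5):
    """Run GrabCut on one image, seeded with known-foreground pixel
    coordinates instead of a user-drawn rectangle."""
    # Everything starts as "probable background"; seeds become definite foreground.
    mask = np.full(image.shape[:2], cv2.GC_PR_BGD, np.uint8)
    for x, y in fg_points:
        cv2.circle(mask, (x, y), 5, cv2.GC_FGD, -1)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image, mask, None, bgd_model, fgd_model,
                iterations, cv2.GC_INIT_WITH_MASK)
    # Collapse GrabCut's four labels into a binary mask.
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return np.where(fg, 255, 0).astype(np.uint8)

# Illustrative: image path -> propagated (x, y) feature positions.
seeds_per_image = {"img2.jpg": [(120, 85), (200, 150)]}

# "Bulk" is then just a loop over the dataset.
for path, seeds in seeds_per_image.items():
    mask = grabcut_with_seeds(cv2.imread(path), seeds)
    cv2.imwrite(path + ".mask.png", mask)
```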
Please note that there is already a WIP development for image masking: https://github.com/alicevision/meshroom/pull/708 https://github.com/alicevision/AliceVision/pull/715
@fabiencastan yes, but from my understanding this does not include (bulk) mask generation for images whose background is not a single colour
Yes, it does not include generic background removal; I just wanted to mention it, as this issue looks like version 2 of that image masking node.
Double-DIP also does foreground/background segmentation, but I do like natowi's idea: leveraging the information from ImageMatching sounds very robust, since it would not have to 're-train' itself for every camera angle.
https://github.com/yossigandelsman/DoubleDIP http://www.wisdom.weizmann.ac.il/~vision/DoubleDIP/index.html
I implemented a simple background subtraction as a Meshroom node; see my comment in #188.
You just have to take one clean background image without the object. I added some parameters for dirty backgrounds, but with a clean one it works fairly reliably.
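The basic technique is straightforward; here is a minimal sketch of clean-plate subtraction with OpenCV (not the actual node from #188, and the threshold/kernel values are illustrative):

```python
import cv2
import numpy as np

def subtract_background(image_path, background_path, threshold=30):
    """Mask out everything that matches a clean background shot."""
    img = cv2.imread(image_path)
    bg = cv2.imread(background_path)
    diff = cv2.absdiff(img, bg)                    # per-pixel difference
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    # Morphological open/close suppresses speckle from a slightly dirty background.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask

cv2.imwrite("mask.png", subtract_background("shot.jpg", "background.jpg"))
```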
Bulk generate image masks using detected features and image matching results for generic background removal
Here is a (rough) idea to automate object masking, building on this new feature https://github.com/alicevision/meshroom/pull/708 :
We know from ImageMatching which features correspond across images. How about selecting the keypoints on our object (with a simple toolbox) in one image and then applying the selection to the other images (while allowing adjustments)?
Then we could use a foreground extraction algorithm like this:
(imagine the green dots being selected features)
http://opensource.graphics/how-to-code-a-nice-user-guided-foreground-extraction-algorithm/
http://opensource.graphics/how-to-code-a-nice-user-guided-foreground-extraction-algorithm-addendum/
So we could bulk-create masks with minimal user interaction.
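The algorithm in those posts is built on G'MIC/CImg; as a rough stand-in, OpenCV's watershed gives the same kind of user-guided extraction, where the propagated feature points act as the "green dots" (seed labels and radius here are illustrative):

```python
import cv2
import numpy as np

def watershed_mask(image, fg_points, bg_points):
    """User-guided foreground extraction: foreground/background seed
    points grow into a full segmentation via watershed."""
    markers = np.zeros(image.shape[:2], np.int32)
    for x, y in fg_points:
        cv2.circle(markers, (x, y), 5, 2, -1)  # label 2 = foreground seeds
    for x, y in bg_points:
        cv2.circle(markers, (x, y), 5, 1, -1)  # label 1 = background seeds
    cv2.watershed(image, markers)              # labels flood the image in place
    return np.where(markers == 2, 255, 0).astype(np.uint8)
```

Combined with the selection propagation sketched earlier, the user would seed one image and every matched image would get its mask generated automatically.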
Implementation details:
Once ImageMatching is complete, a new icon in the Viewer could be activated to select features using a simple toolbox.
The bulk masking tool/helper could be integrated into the main GUI (no new node required).
Similar issues: #188 #701