alicevision / Meshroom

3D Reconstruction Software
http://alicevision.org

Image masking #188

Open robertguetzkow opened 6 years ago

robertguetzkow commented 6 years ago

I'd like to suggest image masking as a feature for a future version of Meshroom. If this were implemented, masked areas should not be used for keypoint detection, nor for depth map and texture generation. A simple import of masks in the form of binary images would be enough; a built-in editor is not necessary.

Thank you for developing this great photogrammetry software!

finnbear commented 6 years ago

@robertguetzkow Can I ask how you would create your binary masks? I don't know how I would do that right now, so I would want a built-in editor.

octupe commented 6 years ago

You could just paint a rough mask for each image in GIMP or Photoshop and save it as a PNG named after the image it applies to, with a predetermined extension or suffix.

robertguetzkow commented 6 years ago

@finnbear exactly as @octupe described. Sorry, I've been on vacation; that's why I didn't respond sooner. GIMP, Krita, Affinity Photo, Photoshop or any other image editing software that supports layers.

fabiencastan commented 6 years ago

I agree that this would be a good feature.

In the meantime, a simple workaround is to apply your mask to the input pixels (RGB=0). There is no drawback in doing that. You just need to be careful to keep your image metadata (if your image editing software doesn't preserve metadata, you can transfer it from one image to another with exiftool).
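
For example, a rough, untested sketch of that workaround with OpenCV plus exiftool (the file names are placeholders, and exiftool is assumed to be on the PATH):

```python
import subprocess
import cv2

def apply_mask(image_path, mask_path, out_path):
    """Black out masked areas and copy the metadata back from the original."""
    img = cv2.imread(image_path)
    mask = cv2.imread(mask_path, cv2.IMREAD_GRAYSCALE)
    img[mask == 0] = 0  # set every rejected pixel to RGB=0
    cv2.imwrite(out_path, img)

    # OpenCV does not preserve EXIF on write, so transfer all metadata
    # (focal length etc.) from the original image with exiftool.
    subprocess.run(["exiftool", "-overwrite_original",
                    "-TagsFromFile", image_path, "-all:all", out_path],
                   check=True)

apply_mask("IMG_0001.jpg", "IMG_0001_mask.png", "IMG_0001_masked.jpg")
```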

robertguetzkow commented 6 years ago

@fabiencastan wouldn't that workaround detect edges of the RGB=0 area as features, possibly causing mismatches between images when masking multiple objects/areas?

Yes, metadata needs to be preserved for the focal length calculation. Exiftool is very useful if you need to manually add, edit or copy EXIF data. I definitely recommend it.

fabiencastan commented 6 years ago

Yes, sure, but in practice it works quite well. I have always used soft edges on my masks.

natowi commented 6 years ago

I think a masking node could be based on G'MIC.

ChemicalXandco commented 6 years ago

If there were to be built-in masking, a hierarchical agglomerative clustering procedure that expands the background and foreground regions from the initial strokes seems to be the best solution.

zfarm commented 6 years ago

Sorry, I'm a bit new. Does this entail pulling each photo into an editor and painting out what isn't necessary for the scan? I agree a bounding volume you could define once the point cloud was generated would save a lot of time.

robertguetzkow commented 6 years ago

@zfarm yes, that was my idea. It has the benefit that there are already plenty of stable image editors out there, and it would greatly reduce the work necessary to integrate this feature. Integrating a mask editor into Meshroom seems like overkill to me.

natowi commented 6 years ago

@robertguetzkow Of course you can apply masks to a few images or paint out unwanted areas by hand - but what about datasets with 100+ images? G'MIC can extract the foreground and create reject masks - but not in bulk. I was thinking more of image segmentation like this, to automatically identify and select the object in multiple images and create masks.

@zfarm Yes, a bounding volume/Mask (as a tool in the 3D Viewer?) for the point cloud would be nice. Maybe PCL (->github) could be useful here.

robertguetzkow commented 6 years ago

@natowi sure, I can think of several more automated and sophisticated approaches, but somebody has to implement them as well. I was just being pragmatic. Having any masking at all would be an improvement. In case somebody wants to try your segmentation approach, I'd highly suggest not relying on automated processing alone. There should always be a way to make manual adjustments to the masks.

natowi commented 6 years ago

I'd highly suggest not relying on automated processing alone

@robertguetzkow I agree. When I have some time, I'll look for a suitable library or tool we could use for this task and think about a way to integrate it into the Meshroom workflow. (This is a good starting point.)

jumpjack commented 5 years ago

Have a look at the 3DF Zephyr "Masquerade" plugin, which you can start from inside the program right after importing images, at the click of a button: it lets you draw on the first image which parts to take into account and which to ignore, and then you can propagate the same mask to all images with one click. Very useful if you have a fixed obstructed area in your images.

natowi commented 5 years ago

@jumpjack I am looking for suitable open-source projects. Here is a list I started.

jumpjack commented 5 years ago

I was talking about the idea: an "image mask editor" embedded in Meshroom rather than an external program.

natowi commented 5 years ago

@jumpjack Yes, I know. That's why I am looking for an open-source project with a compatible license and good usability/results that could be merged with Meshroom, as the main devs have enough to do (I think) with working on Meshroom core features.

Misterdudeman commented 5 years ago

I'd be interested in something that could accept an alpha channel from a PNG or TIFF as a form of mask input.

MightyBOBcnc commented 5 years ago

I would be happy with importing masks from an external program (e.g. GIMP, Photoshop) as black-and-white images or as an alpha channel from a PNG, TIFF, TGA, etc.

Example image: https://i.imgur.com/ZNrztz1g.jpg Example mask that could be a stand-alone image or an alpha channel: https://i.imgur.com/UDHvlKVg.jpg

An embedded editor is not strictly necessary (although I wouldn't object to having both features).

natowi commented 5 years ago

Reference to Google Groups: DeepLab-based masking

ALfuhrmann commented 5 years ago

I wrote a simple script which masks the filtered depth maps. It performs the final masking step, using masks generated externally (with ImageMagick at the moment).


At the moment, this is just something which I run from the command line, but I would be willing to convert it to a Meshroom node if someone could give me some pointers on how to do this.

I've looked into the node definition files, but some kind of short introduction or description of the process would be really helpful; otherwise it's trial and error for me.

ALfuhrmann commented 5 years ago

Never mind, I figured it out from the sources.

I now have a node-based workflow for masking, see #566
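
For anyone else looking into this, a pure-Python node descriptor looks roughly like the sketch below (based on the descriptors shipped in meshroom/nodes at the time; the class and attribute names here are illustrative, not the actual DepthMaskFilter node):

```python
from meshroom.core import desc

class DepthMaskFilter(desc.Node):
    """Illustrative skeleton: apply external masks to filtered depth maps."""

    inputs = [
        desc.File(
            name='depthMapsFolder',
            label='DepthMaps Folder',
            description='Filtered depth maps to be masked.',
            value='',
            uid=[0],
        ),
        desc.File(
            name='maskFolder',
            label='Mask Folder',
            description='Folder with one grayscale mask per image (0 = masked).',
            value='',
            uid=[0],
        ),
    ]

    outputs = [
        desc.File(
            name='output',
            label='Output Folder',
            description='Masked depth maps.',
            value=desc.Node.internalFolder,
            uid=[],
        ),
    ]

    def processChunk(self, chunk):
        # Read each depth map, zero out the pixels that are black in the
        # corresponding mask, and write the result to chunk.node.output.value.
        pass
```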

zfarm commented 5 years ago

Amazing! This will be helpful. Do you know if it's going to be in the next build, or is there a way to implement it now?

ALfuhrmann commented 5 years ago

I posted this on the other thread:

Here is everything you should need for testing:

masking_patch.zip

Copy the "lib" folder from the ZIP in your meshroom root folder, merging it with the "lib" in there. At the moment it is only tested with Meshroom-2019.1.0-win64, but it probably works with 2019.2.0, too.

The ZIP also contains a sample Meshroom graph "Meshroom-2019.1.0-win64", which shows the necessary nodes and connections.

So you only need to drop your images into Meshroom and set the "DepthMaskFilter/Mask Folder" attribute to the folder containing your masks. Masks are named like the source image they belong to and have to be grayscale .PNG files. Every pixel with value 0 (black) is masked (see here).
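
If your editor exports the masks under different names or formats, a small helper like this can batch-convert them (just a sketch, not part of the patch; it assumes Pillow, and that "named like the source image" means the same file name with a .png extension):

```python
from pathlib import Path
from PIL import Image

IMAGES = Path("images")        # source photos used in Meshroom
MASKS_IN = Path("masks_raw")   # masks exported from GIMP/Photoshop
MASKS_OUT = Path("masks")      # folder to set as "DepthMaskFilter/Mask Folder"
MASKS_OUT.mkdir(exist_ok=True)

for image in IMAGES.iterdir():
    src = MASKS_IN / (image.stem + ".png")
    if not src.exists():
        print("No mask for", image.name)
        continue
    # Convert to 8-bit grayscale; black (0) marks the masked pixels.
    Image.open(src).convert("L").save(MASKS_OUT / (image.stem + ".png"))
```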

If you try this, please post your experience over there: #566.

At the moment, this is only my solution for a specific problem I have. If other people can successfully use it, I am going to submit a pull request to Meshroom. No idea if or when this is going to be in the distribution.

natowi commented 4 years ago

I found this paper: Perspective-consistent multifocus multiview 3D reconstruction of small objects. They generate their masks using two images, one in normal front lighting and one with a strong back light. This method can be useful for turntable setups.
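
The back-lit shot could probably be turned into a mask with something as simple as the following (an untested sketch of the idea, not code from the paper; the file names and the Otsu threshold are my assumptions):

```python
import cv2

# In the back-lit shot the object is a dark silhouette on a bright background,
# so a global threshold separates the two; Otsu picks the split automatically.
backlit = cv2.imread("pos01_backlit.jpg", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(backlit, 0, 255,
                        cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# mask: 255 = object (dark in the back-lit image), 0 = background (masked).
# Use it for the front-lit shot taken from the same turntable position.
cv2.imwrite("pos01_mask.png", mask)
```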

ALfuhrmann commented 4 years ago

This would work fine, but requires twice as many images to be taken.

I have tentatively implemented a "MakeMaskFilter" node for Meshroom, which uses background subtraction to generate masks. Additionally, I look for a continuous silhouette by only filling the mask from the image corners inward, but this only works because my objects do not have holes in them.
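
In OpenCV terms, the core of it looks roughly like this (a simplified sketch, not the actual node code; the empty background shot and the threshold value are assumptions):

```python
import cv2
import numpy as np

def make_mask(image_path, background_path, threshold=25):
    img = cv2.imread(image_path)
    bg = cv2.imread(background_path)  # shot of the empty scene

    # Pixels that differ strongly from the background are candidate foreground.
    diff = cv2.cvtColor(cv2.absdiff(img, bg), cv2.COLOR_BGR2GRAY)
    _, fg = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)

    # Flood fill from the four corners: only the background connected to the
    # image border gets filled, which keeps the silhouette continuous.
    flood = fg.copy()
    h, w = fg.shape
    ff_mask = np.zeros((h + 2, w + 2), np.uint8)  # floodFill needs a 2 px border
    for x, y in [(0, 0), (w - 1, 0), (0, h - 1), (w - 1, h - 1)]:
        if flood[y, x] == 0:
            cv2.floodFill(flood, ff_mask, (x, y), 255)

    # Reachable background = filled by the corner fill but not marked as
    # foreground by the diff.
    background = cv2.bitwise_and(flood, cv2.bitwise_not(fg))
    return cv2.bitwise_not(background)  # 0 = masked, 255 = keep

cv2.imwrite("shot_0001_mask.png", make_mask("shot_0001.jpg", "background.jpg"))
```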

A complete pipeline now looks like this: [node graph screenshot]

When I have the time I am going to upload this here.

fabiencastan commented 4 years ago

@ALfuhrmann It could make sense to integrate a background subtraction option into https://github.com/alicevision/AliceVision/pull/715. Currently there is only one algorithm implemented, for HSV keying, but the option is already there to add other strategies. Would you be interested in making such a contribution?

In the PR, the node has a grow/shrink option for post-processing the mask at the end. The FeatureExtraction node has a new input, "masksFolder", to retrieve features only in the selected areas. The PrepareDenseScene node also has a new "Masks Folders" option to compute the depth maps only in the selected areas, which reduces the computation time.
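
In OpenCV terms (just a conceptual sketch, not the AliceVision C++ code; the green hue bounds are assumptions), HSV keying plus grow/shrink amounts to something like:

```python
import cv2
import numpy as np

img = cv2.imread("IMG_0001.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Key out the green screen: everything inside the HSV range becomes background.
green = cv2.inRange(hsv, (35, 60, 60), (85, 255, 255))
mask = cv2.bitwise_not(green)  # 0 = masked (green screen), 255 = keep

# Grow/shrink post-processing: dilate to be more permissive near the
# silhouette, erode to be stricter.
grow_shrink = 3  # positive = grow, negative = shrink (illustrative value)
kernel = np.ones((3, 3), np.uint8)
if grow_shrink > 0:
    mask = cv2.dilate(mask, kernel, iterations=grow_shrink)
elif grow_shrink < 0:
    mask = cv2.erode(mask, kernel, iterations=-grow_shrink)

cv2.imwrite("IMG_0001_mask.png", mask)
```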

ALfuhrmann commented 4 years ago

I'd have to take a look at #715 first. Since I am lazy, my nodes are Python-only; I would have to switch to C++ for a usable implementation. I'll look into it over Christmas.

fabiencastan commented 4 years ago

If I have enough time, I may be able to integrate it. Could you share your python script somewhere?

ALfuhrmann commented 4 years ago

Sure: https://pastebin.com/rFf14QUY

This has worked fine for me, but I am sure there is lots of room for improvement.

It even has some comments(!), but message me if you need additional information.

Sazoji commented 4 years ago

Google Camera has an automated depth map and subject filtering feature stored in the EXIF data (Lens Blur and certain versions of Portrait Mode, respectively). The subject filtering isn't that great (even when compared to OpenCV), but it's free to implement in photo captures. Could that data be used directly, or would I have to go into GIMP, extract the data as a separate image, and then remove the EXIF "depth" (mask, no depth data) for use in a script?

ALfuhrmann commented 4 years ago

Google Camera has an automated depth map and subject filtering feature stored in the EXIF data

I would have to look into this. Could you upload an example somewhere, or post it here?

Sazoji commented 4 years ago

I can't seem to replicate it since moving to Android 10, but here is a photo I took at the time with all the data. I also think it has a burst mode or something, judging by the file size. 00100sPORTRAIT_00100_BURST20190701125144412_COVER-depthmap

00100sPORTRAIT_00100_BURST20190701125144412_COVER

natowi commented 4 years ago

Bokeh effect: https://github.com/panrafal/depthy for Google Camera (also http://stereo.jpn.org/kitkat/indexe.html, http://stereo.jpn.org/kitkat/gcamera001.zip) and https://github.com/designer2k2/depth-map-extractor for Huawei. These are actually embedded depth maps. Apple has something similar: https://developer.apple.com/documentation/avfoundation/avportraiteffectsmatte/extracting_portrait_effects_matte_image_data_from_a_photo https://www.raywenderlich.com/314-image-depth-maps-tutorial-for-ios-getting-started

Extract using ExifTool
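
For example (untested; the listing step is generic, but the exact tag name in the second step depends on the phone vendor and is only an assumption based on older Google Camera files):

```python
import subprocess

photo = "portrait_shot.jpg"  # placeholder file name

# 1. List every tag with its group name to spot the embedded depth data.
subprocess.run(["exiftool", "-a", "-G1", "-s", photo], check=True)

# 2. Dump the payload of the depth tag found in step 1 (tag name is an
#    assumption; replace it with whatever the listing shows).
with open("depthmap.jpg", "wb") as out:
    subprocess.run(["exiftool", "-b", "-XMP-GDepth:Data", photo],
                   stdout=out, check=True)
```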

Sazoji commented 4 years ago

Alright, Google Camera uses Lens Blur mode to only approximate depth. I wish my OnePlus 6T would let me extract any depth data from the sensor; all portrait effects using the depth sensor need to be done within the camera pre-shot and aren't included in the data (it also permanently modifies the original photo with simulated depth, unlike GCam). It seems that the depth approximation algorithm changed, and the 2014 Play Store Google Camera's algorithm is different from my old GCam's Lens Blur/Portrait Mode.

The Portrait Mode's algorithm and "depth maps" look usable for subject selection. Is there any info on what computer vision code they are using for the Pixel 1-3 range depth modes?

natowi commented 4 years ago

@mjc619 did you try extracting the "portrait mode" image using one of the listed tools? If one works for you, it is easy to write a simple node that wraps the tool and runs it before the masking node.

nyersa commented 4 years ago

Has there been any movement on this issue? I currently use a turntable with a green screen that is automatically masked using ImageMagick. I would love to be able to move to something like Meshroom, but without a masking feature of some sort it is not currently practical...

fabiencastan commented 4 years ago

Yes, there is a new implementation for that: https://github.com/alicevision/meshroom/pull/708. But it's not yet in the "develop" branch.

nyersa commented 4 years ago

Excellent, thanks!

TRex22 commented 4 years ago

If anyone is interested, I've been working on building out some custom nodes, including masking with ResNets from torchvision, and I've had some success with the supported categories on input images.

natowi commented 4 years ago

@TRex22 Yes, that is really interesting. How do you include the torch dependencies? Do you import torch as an external dependency in the node, or did you find a solution to create a stand-alone torch build with the ResNet model that does not require installation? I have been testing different DL masking and 3D reconstruction tools (discussed here: https://github.com/alicevision/meshroom/issues/528), but an issue is the difficulty of building a standalone solution. (Torch looks like a good framework in my opinion, as it is possible to quickly bundle PyTorch scripts into executables for testing, and a model converted to TorchScript can be run with the C++ libtorch as well as a Python interpreter, so it would fit perfectly with the AliceVision C++ and Meshroom Python code. Many other frameworks target online deployment or do not support multiple platforms.)
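
For reference, a minimal torchvision-based masking sketch (my own illustration of the approach, not TRex22's node; the model choice, category index and file names are assumptions):

```python
import numpy as np
import torch
import torchvision
from PIL import Image
from torchvision import transforms

model = torchvision.models.segmentation.deeplabv3_resnet50(pretrained=True).eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("IMG_0001.jpg").convert("RGB")
with torch.no_grad():
    out = model(preprocess(img).unsqueeze(0))["out"][0]
labels = out.argmax(0).byte().cpu().numpy()

# Keep one category (15 = "person" in the Pascal VOC labelling of this model).
mask = np.where(labels == 15, 255, 0).astype(np.uint8)
Image.fromarray(mask).save("IMG_0001_mask.png")

# For bundling, the model can (in recent torchvision versions) be exported to
# TorchScript and then run from C++ with libtorch:
scripted = torch.jit.script(model)
scripted.save("deeplabv3_resnet50.pt")
```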

TRex22 commented 4 years ago

Yeah, right now it's just a global dependency, but I have been looking at ways of packaging it all up. One thing I want to do is work on a more user-friendly experience when importing custom nodes... like a plugin system where you give it a ZIP and the UI handles a bunch for you.

TRex22 commented 4 years ago

But I haven't yet posted on the mailing group. I want to refine what I have and figure out exactly what my plan is for implementing such a system.

andytany commented 3 years ago

Can I add a mask before FeatureExtraction?

natowi commented 3 years ago

@andytany this depends on your use case. Masking images from the start is possible and can make sense when you have movement in the background or are using a turntable. But in some cases it can decrease the SfM quality, as the surrounding information can be beneficial for FeatureMatching.

endolith commented 3 years ago

@fabiencastan

In the meantime, a simple workaround is to apply your mask to the input pixels (RGB=0). There is no drawback in doing that.

If you mean just masking out unwanted artifacts from the input images, this doesn't work for me. Meshroom considers the black to be part of the image and gets very confused:

[screenshots: Meshroom with black cutout]

Is it ok to crop input images to remove unwanted objects? Or does that screw up the lens correction, since the lens distortion would no longer be centered and the input images would have different aspect ratios?

Is it ok to chop images into smaller (possibly overlapping) pieces that don't include the unwanted sections, and process each sub-image as separate input images? [Context]

natowi commented 3 years ago

At the moment you can not mask out different parts in multiple images. You can only mask out the background.

Unwanted areas should be avoided when taking the images. Do not change the dimensions of your images by cropping. Meshroom may work with cropped images if the metadata is removed (ideally), but it is not supported or recommended.

Meshroom relies on the metadata and correct image dimensions for an accurate reconstruction.

endolith commented 3 years ago

@natowi

Meshroom may work with cropped images if the metadata is removed, but it is not supported or recommended

Why would it require the metadata to be removed?

adam-bielanski commented 3 years ago

@natowi - I tried to manually mask various pictures in Photoshop and I must say that, strangely, the white mask worked quite OK whilst the black one gave terrible results.

natowi commented 3 years ago

Why would it require the metadata to be removed?

Because the sensor information does not match the image resolution. Removing the metadata results in Meshroom estimating the parameters.

the white mask worked quite OK whilst the black one gave terrible results.

Good to know, thank you. @endolith you could give this a try.