alicevision / Meshroom

3D Reconstruction Software
http://alicevision.org

[request] Light source moving with the camera to always have a well-lit perspective #927

Open Bardo-Konrad opened 4 years ago

Bardo-Konrad commented 4 years ago

Is your feature request related to a problem? Please describe.
You cannot be the light source. You have to light the object with external, fixed lights, which inevitably bakes shadows into the captures. That creates problems when light comes from a different direction in your 3D application: you have to remove or recreate the lighting before you can continue with your render, and you are limited by it. Also, in badly lit areas no matching points will be found, or only noisy, low-quality ones, which makes the model patchy.

Describe the solution you'd like
The camera and the light source should be at, or close to, the same position, i.e. everything the camera sees will be well lit, for instance by an external light fixed to the camera or a smartphone's built-in light. There will be a lot of hard shadows, which need to be taken into consideration.

Describe alternatives you've considered
One or two additional stationary light sources, while the brightest light, the main light, still comes from the direction of the camera.

Bardo-Konrad commented 4 years ago

@ChemicalXandco Can you elaborate on why you gave the confused emoji?

ChemicalXandco commented 4 years ago

I do not understand how the solution you suggested could be integrated into the software. The software relies on the scene having consistent lighting (color) in order to triangulate camera locations and generate depth maps; a moving light source makes the lighting inconsistent, which goes against the inherent nature of the software.
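To illustrate the point (this is a toy sketch, not Meshroom code): under a simple Lambertian shading model, a surface point's brightness depends on the light direction. With a fixed light the point looks the same in every photo, so it can be matched across images; with a camera-mounted light the light direction changes per shot and the same point photographs differently.

```python
import numpy as np

def shade(albedo, normal, light_dir):
    """Lambertian intensity of one surface point: albedo * max(0, n . l)."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * max(0.0, float(np.dot(n, l)))

normal = np.array([0.0, 0.0, 1.0])   # surface facing +Z
albedo = 0.8

# Fixed light: the point has identical brightness in every photo,
# so photometric matching across images works.
fixed_light = np.array([0.3, 0.2, 1.0])
view_a = shade(albedo, normal, fixed_light)
view_b = shade(albedo, normal, fixed_light)

# Camera-mounted light: the light direction moves with each shot,
# so the same point has a different brightness in each photo.
light_shot1 = np.array([0.0, 0.0, 1.0])   # camera straight on
light_shot2 = np.array([1.0, 0.0, 0.3])   # camera moved to the side
view_1 = shade(albedo, normal, light_shot1)
view_2 = shade(albedo, normal, light_shot2)
print(view_a, view_b, view_1, view_2)
```

The fixed-light views come out identical, while the moving-light views differ substantially, which is exactly the inconsistency that degrades matching.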

I recommend not using flash when taking the pictures and making sure the scene is well lit; it is also possible to remove shadows with software such as Agisoft De-Lighter (free). Make sure to look at this page, which lists important things to look for when capturing pictures. I hope this advice is useful to you :)

Bardo-Konrad commented 4 years ago

Then the core of the software needs changing. Your suggestions are often not feasible, and a software de-lighter cannot create what isn't there.

julianrendell commented 4 years ago

Hi @Bardo-Konrad, unfortunately the algorithms used, which are cutting edge and based on years of pure math and computer-science research, just aren't smart enough to "know" what a feature really is, or how the same thing can look different under different lighting.

It's quite educational to turn on the "features" overlay and see which points the software has determined to be "unique". It does this for each image and then tries to match the features between images, but with no real understanding.
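That matching step can be sketched in a few lines. This is a toy illustration, not Meshroom's implementation: each feature is a numeric descriptor vector, and the matcher simply pairs nearest neighbours, with a ratio test to reject ambiguous pairs. Nothing in it "understands" what a feature is.

```python
import numpy as np

def match(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with a ratio test."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Accept only if the best candidate is clearly better than the
        # runner-up; pure geometry of descriptor vectors, no semantics.
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

rng = np.random.default_rng(0)
desc_img1 = rng.random((5, 8))           # 5 toy descriptors from image 1
# The same features seen in image 2, with small appearance noise.
desc_img2 = desc_img1 + rng.normal(0, 0.01, size=(5, 8))
print(match(desc_img1, desc_img2))
```

If the lighting changes between shots, the descriptors drift apart, distances grow, and the ratio test starts rejecting correct matches, which is why consistent lighting matters so much.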

There is ongoing research to make this smarter, e.g. using neural networks. We're really just at the beginning of turning computer vision into computer visual perception (i.e. software that has some understanding of what it is seeing).

If you have the interest and skills, we'd all love to see a smarter feature detector!

But as the math and code are beyond my skills, I'm having to learn how best to give the algorithm information that it can understand.

For reconstructing 3D shape, it needs even lighting so that features look as close to identical as possible between images, allowing them to be detected as the same thing and then triangulated between images. I'm learning that fewer pictures often work out better than more.
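The triangulation step itself is standard multi-view geometry. Here's a minimal two-view linear (DLT) triangulation sketch; the camera matrices are made up for the example, and this is not Meshroom's own code:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views."""
    # Each image observation gives two linear constraints on the 3D point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the null-space direction of A (last right singular vector).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # de-homogenize

# Two simple 3x4 camera matrices: identity, and one shifted along X.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

# Project a known 3D point into both views to get the 2D observations.
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)
print(X_est)  # recovers approximately [0.5, 0.2, 4.0]
```

The catch is that `x1` and `x2` must be detections of the *same* physical point; if inconsistent lighting causes a mismatch, the triangulated point lands in the wrong place, which is where the patchy, noisy geometry comes from.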

I think of de-lighting not as adding information, but as taking away information that is not useful to the algorithm (moving shadows, etc.). It also sees subtle changes in colour and shade that I don't ;-)

If your goal is to create an accurate shape plus accurate texturing, then you can re-project a different set of images onto the generated mesh.

There is some facility for reprojection in Meshroom, but I've been looking into MeshLab, which appears to have more powerful reprojection abilities (because they're human-guided, i.e. more work, rather than fully automated). There are some good tutorials on this on YouTube. Here's an example playlist: https://www.youtube.com/playlist?list=PL60mCsep96JdC8Y7NQvLIMxx8XzXCT3iK
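The core of re-projection texturing is simple: push each mesh vertex through a photo's camera matrix and sample the pixel it lands on. A toy sketch (the function name and setup are illustrative, not any tool's API; real texturing blends multiple views and interpolates):

```python
import numpy as np

def sample_colour(image, P, vertex):
    """Project a 3D vertex with camera matrix P and fetch the pixel value."""
    x = P @ np.append(vertex, 1.0)     # homogeneous projection
    u, v = x[:2] / x[2]                # perspective divide -> pixel coords
    # Nearest-neighbour lookup; real pipelines interpolate and blend views.
    return image[int(round(v)), int(round(u))]

# Toy 4x4 "photo" with distinct pixel values, and a trivial camera.
image = np.arange(16).reshape(4, 4)
P = np.hstack([np.eye(3), np.zeros((3, 1))])
vertex = np.array([2.0, 1.0, 1.0])     # projects to pixel (u=2, v=1)
print(sample_colour(image, P, vertex))
```

Because the geometry and the texture come from the projection math rather than the feature matching, you can capture one evenly lit image set for shape and a second, nicely lit set purely for colour.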

simogasp commented 4 years ago

@Bardo-Konrad Are you referring to the camera and light(s) of the 3D rendering widget of Meshroom?

Bardo-Konrad commented 4 years ago

@simogasp Where?

julianrendell commented 4 years ago

@simogasp I believe the request is regarding lighting at image capture, not rendering in Meshroom.

MightyBOBcnc commented 4 years ago

Are there any other photogrammetry software tools that work with this sort of pipeline (moving light source) that Meshroom can be compared to?

hargrovecompany commented 3 years ago

It seems to me that a solution using a neural network might be the only way to make that happen.