
Photogrammetry Explained #71


layout: post
title: Photogrammetry explained
preview:
thumbnail:

Photogrammetry is a big word - you can think of it as a combination of the words "Photography" and "Trigonometry". I thought I'd outline my process in this blog post.

Scanning a mountain river

I chose this location because I was interested in seeing how the photogrammetry software would handle the reflections off the water, and because the area has large changes in elevation, which is exactly the kind of thing I want to visualize. I spent 50 minutes flying and captured 289 geo-referenced photos. Each photo had between 4,000 and 15,000 tie points.

The general idea with photogrammetry is the use of tie points. These are unique targets within a photo which can be identified in other photos. Take a look at these 3 photos:

As the drone flies over this area it sees the parking lot/turnaround in the top left of the frame.

The drone gets directly above the parking lot, and it appears in the middle-left of the frame.

Now the drone has passed the parking lot, and it appears in the lower-left of the frame.
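
RealityCapture does this matching automatically, but to make the idea concrete, here's a rough sketch of how tie points could be found between two overlapping photos using the open-source OpenCV library (the file names are just placeholders, not files from my flight):

```python
# Illustrative tie-point matching between two overlapping aerial photos using
# OpenCV's ORB features. RealityCapture's own matcher is proprietary; the file
# names below are placeholders.
import cv2

img_a = cv2.imread("photo_a.jpg", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("photo_b.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=5000)            # detect up to 5,000 keypoints
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

# Brute-force Hamming matcher; cross-checking keeps only mutual best matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

# Each good match is a candidate tie point: the same physical feature
# (a rock, a road marking, a tree) seen in both photos.
print(f"{len(matches)} candidate tie points between the two photos")
```

Each match pairs a pixel location in one photo with a pixel location in the other, which is exactly the raw material the next step needs.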


By taking the GPS metadata from these photos and treating the images themselves as data, RealityCapture is able to create a colorized point cloud from the calculations it performs on that data.
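
That geo-reference data lives in each photo's EXIF tags. As a quick aside, here's a sketch of reading it with the Pillow imaging library (the file name is a placeholder):

```python
# Sketch: pull the GPS tags out of a drone photo's EXIF metadata with Pillow.
# The file name is a placeholder.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def gps_info(path):
    exif = Image.open(path)._getexif() or {}
    # Find the GPSInfo block and translate its numeric keys to readable names
    for tag_id, value in exif.items():
        if TAGS.get(tag_id) == "GPSInfo":
            return {GPSTAGS.get(key, key): val for key, val in value.items()}
    return {}

print(gps_info("photo_a.jpg"))  # e.g. GPSLatitude, GPSLongitude, GPSAltitude
```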

Take a common resolution like 1920 x 1080, which is 1,920 pixels wide and 1,080 pixels tall. Suppose an object of interest appears in one image 200 pixels over and 500 pixels down, and RealityCapture knows the camera's physical parameters (a 50mm focal length, for example). It can cross-reference that against another photo where the same object appears 200 pixels over and 800 pixels down. It can then draw a triangle from the two geo-positioned camera locations to the calculated position of the object. This process can be accurate to less than 5cm (!).
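
To make that triangle-drawing idea concrete, here is a heavily simplified sketch: two cameras with known positions, a basic pinhole model, and made-up numbers (positions, focal length) rather than anything RealityCapture actually exposes. Real photogrammetry also solves for lens distortion and camera orientation, which I'm skipping here.

```python
# Minimal two-view triangulation sketch with NumPy. Camera positions, focal
# length, and orientations are made-up illustrative values, not values taken
# from RealityCapture.
import numpy as np

def pixel_to_ray(px, py, width, height, focal_px):
    """Viewing-ray direction through a pixel, in the camera's own frame."""
    return np.array([px - width / 2, py - height / 2, focal_px], dtype=float)

def triangulate(cam1, ray1, cam2, ray2):
    """Midpoint of the closest approach between two (possibly skew) rays."""
    d1, d2 = ray1 / np.linalg.norm(ray1), ray2 / np.linalg.norm(ray2)
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = cam1 - cam2
    denom = a * c - b * b
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return (cam1 + t1 * d1 + cam2 + t2 * d2) / 2

# The same object at pixel (200, 500) in one 1920x1080 frame and (200, 800)
# in the next; the two camera positions are 30 m apart, and both cameras share
# the same orientation (identity rotation) to keep the example simple.
ray_a = pixel_to_ray(200, 500, 1920, 1080, focal_px=1500)
ray_b = pixel_to_ray(200, 800, 1920, 1080, focal_px=1500)
point = triangulate(np.array([0.0, 0.0, 100.0]), ray_a,
                    np.array([0.0, -30.0, 100.0]), ray_b)
print(point)   # estimated object position in the same frame as the cameras
```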

From this point cloud, we are able to render a 3D mesh. This takes the millions of raw points from the generated point cloud and turns them into connected triangles (a 3D mesh). After that, you can generate a texture for the mesh. It's really incredible. Let's take a look. My computer gives me warnings when I go above 40 million triangles, so I took a screenshot of this with 39,900,000 triangles:

Now, I've reduced the triangle count to half a million here, so that it loads in your browser. You can zoom around and interact with the model below.
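
RealityCapture handles the meshing and the triangle reduction internally, but if you wanted to do the same steps yourself on an exported point cloud, here's a rough sketch with the open-source Open3D library (the file names are placeholders, and Poisson reconstruction is just one way to do it, not necessarily what RealityCapture uses):

```python
# Sketch: point cloud -> triangle mesh -> decimated mesh, using Open3D.
# File names are placeholders; RealityCapture's own meshing is proprietary.
import open3d as o3d

pcd = o3d.io.read_point_cloud("point_cloud.ply")
pcd.estimate_normals()                      # Poisson meshing needs normals

# Poisson surface reconstruction fits a watertight surface through the points
mesh, _densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=10)

# Decimate to ~500,000 triangles so the model loads comfortably in a browser
mesh_small = mesh.simplify_quadric_decimation(target_number_of_triangles=500_000)
o3d.io.write_triangle_mesh("mesh_500k.ply", mesh_small)
```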

Let's improve this model. Before you put your images into RealityCapture, it is a good idea to pre-process them with something like Adobe Lightroom. Look at the collage below and compare the two versions of each photo.

Those are before-and-after photos from a very quick Lightroom touch-up. It is possible to copy and paste Lightroom settings across an entire folder of aerial data and quickly touch up all your photos, and then export every touched-up photo with one click. So for 289 photos, it's about a dozen clicks to batch-process them all, if you use the same Lightroom edit settings across the whole set.

So let's look at how much better the model looks when we touch up all our photos beforehand (again 500,000 triangles):

<< MODEL W PREPROCESS >>

Each photo used to generate this had a true 20MP resolution of 5472 x 3648. Lightroom was used to adjust exposure, contrast, shadows, and highlights, apply de-hazing, and boost the saturation. If you want the quickest way to batch pre-process tons of photos for photogrammetry: import all the drone photos into Lightroom, select one photo, click "Auto", add your own style, then copy and paste those settings to the whole photo set. Now every photo has de-haze applied, so you get less white haze washing out every photo, which would otherwise wash out the texture of the model when it is rendered. Every photo is also artificially more saturated, so the model comes out more colorful. The metadata is preserved, so a 3D model can still be rendered from the edited photos, and I recommend doing this quick step.
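
Lightroom's copy/paste-settings workflow is all point-and-click, but if you ever wanted to script the same kind of batch touch-up, here's a rough sketch with the Pillow library. The folder names and enhancement factors are placeholders; the important detail is carrying the EXIF block over so the GPS data survives the edit:

```python
# Rough sketch of a scripted batch touch-up (contrast + saturation) that keeps
# the EXIF/GPS metadata intact. Folder names and enhancement factors are
# placeholders; this stands in for the Lightroom copy/paste-settings workflow.
from pathlib import Path
from PIL import Image, ImageEnhance

src, dst = Path("flight_raw"), Path("flight_processed")
dst.mkdir(exist_ok=True)

for photo in sorted(src.glob("*.JPG")):
    img = Image.open(photo)
    exif = img.info.get("exif", b"")          # raw EXIF bytes, incl. GPS tags
    img = ImageEnhance.Contrast(img).enhance(1.15)   # mild contrast boost
    img = ImageEnhance.Color(img).enhance(1.25)      # bump the saturation
    # Re-attach the original EXIF so the photogrammetry software still sees GPS
    img.save(dst / photo.name, quality=95, exif=exif)
```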