nmoehrle / mvs-texturing

Algorithm to texture 3D reconstructions from multi-view stereo images

How to set camera parameters? #52

Closed. parvizimosaed closed this issue 7 years ago.

parvizimosaed commented 8 years ago

Hi, I have reconstructed mesh models with another application. Now, when I set the camera parameters and apply textures to the models, a strange shift is visible in all models! I have used the following configuration:

  1. Lens distortions K1 and K2
  2. Principal pointX = 0.5 - (my principal pointX / X dimension of sensor)
  3. Principal pointY = 0.5 - (my principal pointY / Y dimension of sensor)
  4. Aspect ratio ~1
  5. Focal length = focal length / largest dimension of sensor

Moreover, I used tex::image_undistort_bundler and tex::image_undistort_vsfm for making undistorted images. I am sure that the camera parameters and point clouds have been created with high accuracy, but I cannot find the cause of this problem. Is there any assumption in setting camera parameters that I have missed?

nmoehrle commented 8 years ago

The principal point should just be your principal point (in pixels) / dimension of the sensor (in pixels), which is usually around 0.5. Subtracting that from 0.5 could explain the shift.
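For example (a small sketch; the cx, cy and sensor dimensions below are made-up values):

# Hypothetical values: principal point (cx, cy) and sensor dimensions,
# all given in pixels.
cx, cy = 2011.5, 1509.0
width, height = 4000, 3000

ppx = cx / width    # ~0.503, close to 0.5 as expected
ppy = cy / height   # ~0.503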

parvizimosaed commented 8 years ago

Many thanks Nils. I have used the following equations for the principal points; the result is depicted in the attached image. As you can see, a strange shift is obvious in the image.

My questions are:

  1. If principal pointX and principal pointY have different values and the aspect ratio is not 1, does MVE-Texturing still work correctly?
  2. You use an intrinsic parameter matrix, but this matrix does not support lens distortion parameters. Is this matrix the cause of the aforementioned deformation?

I would appreciate it if you could answer these questions and hint at some points to resolve this problem.

nmoehrle commented 8 years ago

Undistortion of the images is done prior to the texturing, either inside sfmrecon for MVE scenes or during the import (NVM bundle file and cam files), so this should not be an issue.

What scene format are you using? If you use an NVM bundle you can create an MVE scene from it using makescene and verify the undistortion.

Principal point and aspect ratio are completely separate, so there should not be an issue with that. However, there is still the question of why you subtract your shift from 0.5 instead of adding it.

parvizimosaed commented 8 years ago

I use .cam files and set non-zero values for the lens distortions k1 and k2, the principal points, the aspect ratio and the focal length. The camera parameters and point clouds are created with the Pix4Dmapper application. Previously, since the rotation and translation matrices of Pix4Dmapper were not suitable inputs for MVE-Texturing, I had to convert both matrices. Experimentally, I found that the shift of the principal point has to be subtracted from 0.5! If I add it, the result is worse, as this figure shows (snapshot00). Another surprising result is obvious in the following picture, where I have marked the important parts with arrows: the red arrows show two adjacent atlases that could be aligned properly; conversely, the green arrows show two that were not aligned correctly (picture1).

nmoehrle commented 8 years ago

I am unsure if Pix4D is using the same undistortion model. Do you have the option to let Pix4D undistort the images?

parvizimosaed commented 8 years ago

Users can trace the following path to export undistorted images from Pix4D: Processing Options -> Initial Processing -> Calibration -> export undistorted images. In one test I used the undistorted images of Pix4Dmapper and ignored MVE's undistortion capabilities, and in another test I generated undistorted images in MVE using the Pix4Dmapper camera parameters. Both results were bad. However, Pix4D maps textures perfectly over the models using the same point cloud, undistorted images and camera parameters. The model below is a result of Pix4D (snapshot00). I guess that the principal point, focal length or aspect ratio of MVE and Pix4D are computed in different ways! As I said, I had to convert the translation as T = (-1)*R*T.
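In code, that conversion is roughly the following (a sketch; it assumes Pix4D exports the world-to-camera rotation R and the camera center C in world coordinates, and the function name is mine):

import numpy as np

def pix4d_to_mve_translation(R, C):
    # Assumes the rotation R maps world to camera coordinates and C is
    # the camera center in world coordinates; MVE expects the translation
    # t with x_cam = R @ x_world + t, hence t = -R @ C, i.e. the
    # "T = (-1)*R*T" conversion mentioned above.
    R = np.asarray(R, dtype=np.float64)
    C = np.asarray(C, dtype=np.float64)
    return -R @ C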

nmoehrle commented 8 years ago

My script for converting Pix4D cam files into MVE scenes ignores distortions and essentially just remaps the intrinsics and extrinsics into MVE's conventions.

Did you encounter views with different focal lengths for x and y?

parvizimosaed commented 8 years ago

No, fx = fy in all tests. In addition, I have used these equations, but the results are not good. I guess this error does not come from my point cloud or camera parameters, because I executed your commands on the Der Hass dataset and got the same error, as the following figure shows! I did not change any command except adding --skip_global_seam_leveling and --skip_local_seam_leveling to texrecon, in order to see how much shift is visible among the atlases (snapshot00). If you need the exported point clouds and textures, you can download them from here. Is this error reasonable? How can I increase the precision?
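The exact invocation was roughly the following (scene and output names are illustrative; only the two flags were added):

texrecon der_hass::undistorted mesh.ply textured --skip_global_seam_leveling --skip_local_seam_leveling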

nmoehrle commented 8 years ago

I had a look at the Der Hass reconstruction and the errors are reasonable. What you see there should be flat stones, but the geometry shows something very different, resulting in imprecise texture mappings.

Assuming perfect camera calibration, you have to expect texture placement errors of the same magnitude as the geometric errors. In this case we have a geometric error of about the width of a gap and a texture shift of half a gap due to a non-orthogonal view. Such shifts are only implicitly minimized through the preferred selection of orthogonal views.

If I remember correctly, we focused on the statue while capturing this dataset and the socket has not been captured very well, mostly at sliver angles.

The geometric accuracy of the reconstruction has a high influence on the resulting texture quality.

Our group recently released a new multi-view stereo algorithm that should enhance the reconstruction quality, especially the reconstruction of textureless areas and normals:

Shading-aware Multi-view Stereo. Fabian Langguth, Kalyan Sunkavalli, Sunil Hadap, Michael Goesele. In: Proceedings of the European Conference on Computer Vision (ECCV), 2016. (Paper, Code)

parvizimosaed commented 7 years ago

Many thanks Nils. I ran several tests to pin down the aforementioned defect. First, I generated some point clouds with another application and created undistorted images. I used a back-projection method to verify that the points, undistorted images and focal length are computed correctly. Then I did the following steps to detect defects:

  1. I triangulated the point clouds with the Poisson reconstruction algorithm at level of detail 12 to verify that the points lie on the surface. Figure 1 is an undistorted image, Figure 2 shows the projection of Figure 1 onto the mesh, and the final figure shows that the points are close to the surface.

dsc_1199 jpg__undistorted

![snapshot202](https://cloud.githubusercontent.com/assets/21953121/19436680/7934516a-947e-11e6-8b36-220e49bea78b.png) ![snapshot01](https://cloud.githubusercontent.com/assets/21953121/19436682/81b233a2-947e-11e6-8c7b-10cce348fc12.png)

  2. I compared corresponding triangles of the textured and mesh models. There is no difference between the locations of the faces and vertices of the two models.
  3. I exported the vertex coordinates from TextureView::get_pixel_coords() and compared them with X1 = R'(X2 - T), where X2 is the Cartesian coordinate of a point in the point cloud, T is the translation and R' is the inverse of the rotation matrix (see the projection sketch after this list). Both results are similar. This test also shows that mvs-texturing works correctly up to get_pixel_coords(), but the final textured model has a 4 pixel shift! This shift is obvious when the camera projects texture over far distances. For example, in the image below I used image92 and image96; the result shows that both textures have shifted. ![untitled](https://cloud.githubusercontent.com/assets/21953121/19436692/8bcf4c80-947e-11e6-83e0-5d7d39e9e4d0.png) In the Der Hass dataset this shift is not obvious because the images are perpendicular to the mesh. I guess that your code changes the UV coordinates after get_pixel_coords()!! Unfortunately, I could not find where.
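For reference, the back projection I used in step 3 is the standard pinhole projection (a sketch; no distortion, the function name is mine):

import numpy as np

def project(K, R, t, X):
    # x_cam = R @ X + t is the inverse of the X1 = R'(X2 - T) mapping
    # used above; K is the 3x3 intrinsic matrix.
    x_cam = R @ np.asarray(X, dtype=np.float64) + t
    x = K @ x_cam
    return x[:2] / x[2]   # pixel coordinates
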
nmoehrle commented 7 years ago

Since the uv coordinates that I obtain through get_pixel_coords() are relative to the input image, I have to alter them a couple of times to obtain the absolute texture coordinates within a texture atlas; there are several coordinate system changes along the way.

I have changed the final texture coords computation in a118fb56 after experiencing issues.

I don't see where the shift is introduced but this requires some further investigation.
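Schematically, the relative-to-absolute mapping is something like this (simplified; the variable names are illustrative, not the actual ones in the code):

def to_atlas_texcoord(uv_rel, patch_offset, atlas_size):
    # uv_rel: pixel coordinates relative to the cropped image patch
    # patch_offset: position of the patch inside the atlas (incl. padding)
    # atlas_size: width/height of the (square) texture atlas in pixels
    u = (patch_offset[0] + uv_rel[0]) / atlas_size
    v = (patch_offset[1] + uv_rel[1]) / atlas_size
    return u, v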

parvizimosaed commented 7 years ago

Thank you so much Nils. I looked at the mentioned lines of code and discovered that if I add 1 to texcoord[0] and texcoord[1] in texture_atlas.cpp#93_94, the error is removed in the dataset below (snapshot00, snapshot101). But for the second dataset below I had to add 2 (snapshot101, snapshot00)!

I guess the error depends on the padding and the atlas size! In these two cases I found the following rules: if (this->size == 1024 and padding == 8) we should add 1 to the texcoord; if (this->size == 4096 and padding == 32) we should add 2 to the texcoord. I traced the process of converting XY coordinates to UV. It seems correct, but you add some offsets, convert double values to integers in some places (such as when finding minimum/maximum values), crop images (according to the minimum/maximum coordinate values) and add padding. These conversions can introduce errors, as the example after this paragraph illustrates. Unfortunately, I still cannot find the cause of the error. Can you guess which lines of code could introduce it?
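For instance, truncating a patch's bounding box from float to int shifts everything inside the patch (a toy illustration, not the actual code):

# Toy illustration: truncating a patch's minimum coordinate before
# cropping moves the patch origin, and every texcoord inside the patch
# inherits the resulting sub-pixel shift.
min_u = 103.7                  # true minimum u of a texture patch
origin = int(min_u)            # truncated to 103 when cropping
uv_rel_true = 10.0             # coordinate relative to the true minimum
uv_rel_cropped = min_u + uv_rel_true - origin   # 10.7 instead of 10.0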

soulslicer commented 7 years ago

Hi all, can I know how you set up this application via the command line? I have a mesh reconstructed from an external application, just like you, and I want to texture it with this application. I have no idea how to begin or proceed. I have all my JPG texture files, the camera transforms and the intrinsics.

nmoehrle commented 7 years ago

You have to create an MVE scene from the images and camera parameters. Once you have set up the files in this structure you can texture your mesh with the command texrecon scene::undistorted mesh.ply model, assuming you called the base folder scene and put your undistorted JPGs as undistorted.jpg into the view folders.
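The expected layout is roughly the following (the view folder names are illustrative):

scene/                        # the base folder from the command above
    views/
        view_0000.mve/
            meta.ini          # per-view camera parameters
            undistorted.jpg   # your undistorted input image
        view_0001.mve/
            ...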

soulslicer commented 7 years ago

Hmm... I have read through that documentation and I am having difficulty understanding how to convert over to that format. So I have my files as such now: a PLY file of the textureless mesh, the undistorted images as JPG files, and each image's transform and camera matrix stored in a text file.

I'm guessing that where I run my command from, I will need a "scene" folder which the program references? Then I have to create a folder called views within that, and conform my data to those .mve files? It's all rather confusing. I would appreciate it if you could share a sample mesh and scene folder, just so I can understand the structure.

Thank you

nmoehrle commented 7 years ago

You don't have to create .mve files, just .mve folders for each of your images. In these folders you then create a meta.ini file that contains the camera info (you will have to extract the focal length etc. from the camera matrix for that) and put the undistorted jpg file next to it. You can find example datasets on the MVE project website.

I wrote a couple of conversion scripts in Python that can convert between scene formats but didn't have the time to publish them; I hope I can do that later this week.
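In the meantime, writing a single view could look roughly like this (a sketch; the meta.ini field names follow the MVE wiki, the function name and everything else is illustrative):

import os
import numpy as np

def write_mve_view(scene_dir, idx, name, flen, paspect, pp, R, t):
    # Creates views/view_XXXX.mve/meta.ini with the normalized intrinsics
    # (focal length, pixel aspect, principal point) and the extrinsics
    # (row-major rotation matrix and translation vector).
    view_dir = os.path.join(scene_dir, "views", "view_%04d.mve" % idx)
    os.makedirs(view_dir, exist_ok=True)
    with open(os.path.join(view_dir, "meta.ini"), "w") as f:
        f.write("[view]\nid = %d\nname = %s\n\n" % (idx, name))
        f.write("[camera]\n")
        f.write("focal_length = %.10f\n" % flen)
        f.write("pixel_aspect = %.10f\n" % paspect)
        f.write("principal_point = %.10f %.10f\n" % (pp[0], pp[1]))
        f.write("rotation = %s\n" % " ".join("%.10f" % x for x in np.asarray(R).ravel()))
        f.write("translation = %s\n" % " ".join("%.10f" % x for x in np.asarray(t).ravel()))

(Remember to also copy the undistorted image into the same folder, e.g. as undistorted.jpg.)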

soulslicer commented 7 years ago

I've looked through your website and downloaded some of the datasets. They seem to have a .out file and a views folder, but I do not see a .ply file.

Also, within each view I seem to have a meta.ini file which has the "focal length" as a parameter. I have fx, fy, cx, cy (4 parameters). How can these be input?

nmoehrle commented 7 years ago

The following function determines the MVE parameters for a view from the camera matrix (K) and the image dimensions (width, height). Different focal lengths (fx, fy) are encoded in the pixel aspect ratio (self.paspect), the focal length (self.flen) is normalized with the larger image/sensor dimension, and the principal point (self.pp) is normalized with the respective image dimension.

def set_intrinsics(self, K, width, height):
    fx = K[0, 0]
    fy = K[1, 1]

    # Different focal lengths are expressed as a pixel aspect ratio.
    self.paspect = fy / fx

    dim_aspect = width / height
    img_aspect = dim_aspect * self.paspect

    # Normalize the focal length with the larger image/sensor dimension.
    if img_aspect < 1.0:
        self.flen = fy / height
    else:
        self.flen = fx / width

    # Normalize the principal point with the respective image dimension.
    ppx = K[0, 2] / width
    ppy = K[1, 2] / height

    self.pp = [ppx, ppy]

soulslicer commented 7 years ago

Yeah, I get this, but what about the cx, cy values?

nmoehrle commented 7 years ago

I don't know what you could mean with a cx and cy value other than the principal point. You'll have to give me more than just the variable names; conventions differ...

soulslicer commented 7 years ago

Ah, my bad. I meant the principal point.

soulslicer commented 7 years ago

In the sample files given, the only parameter visible was the focal length value. I'm aware of how your script works, but I need to know which key-value pairs I need to put into the meta.ini file to, say, set the pp value, etc.

nmoehrle commented 7 years ago

https://github.com/simonfuhrmann/mve/wiki/MVE-File-Format#the-metaini-format

debuleilei commented 5 years ago

@soulslicer I'm wondering whether you have used mvs-texturing successfully? I have met the same situation as you.