parvizimosaed closed this issue 7 years ago.
The principal point should just be your principal point (in pixels) / dimension of the sensor (in pixels) and is usually 0.5. Subtracting that from 0.5 could explain the shift.
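To make the convention concrete, here is a tiny arithmetic sketch; the pixel values are made up for illustration and are not from the thread:

```python
# Normalized principal point: pixel coordinate divided by the sensor
# dimension in pixels (values here are hypothetical).
cx_pixels = 2050.0   # principal point x in pixels
width = 4000.0       # sensor width in pixels

ppx = cx_pixels / width       # the expected convention, close to 0.5
ppx_subtracted = 0.5 - ppx    # the questionable "subtract from 0.5" variant

print(ppx, ppx_subtracted)
```

The second value lands near zero instead of near 0.5, which would place the principal point at the image corner and could explain a visible shift.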
Many thanks Nils. I have used the following equations for the principal points. The result is depicted in the attached image; as you can see, a strange shift is obvious in the image.
My questions are:
I would appreciate it if you could answer these questions and hint at some points to resolve this problem.
Undistortion of the images is done prior to the texturing either inside sfmrecon for MVE scenes or during the import (NVM bundle file and cam files) so this should not be an issue.
What scene format are you using? If you use a NVM bundle you can create a MVE scene from it using makescene and verify the undistortion.
Principal point and aspect ratio are completely separate, so there should not be an issue with that. However, there is still the question why you subtract your shift from 0.5 instead of adding it.
I use .cam files and set non-zero values for the lens distortions k1 and k2, the principal point, the aspect ratio, and the focal length. The camera parameters and point clouds are created with the Pix4Dmapper application. Previously, since the rotation and translation matrices of Pix4Dmapper were not suitable inputs for MVE-Texturing, I had to convert both matrices. Experimentally, I found that the shift of the principal point has to be subtracted from 0.5! If I add it instead, the result is worse, as this figure shows. Another surprising result is visible in the following picture, where I have marked the important parts with arrows. The red arrows show that two adjacent atlases could be aligned properly; conversely, the green arrows show that they were not aligned correctly!
I am unsure if Pix4D is using the same undistortion model. Do you have the option to let Pix4D undistort the images?
Users can follow this path to export undistorted images from Pix4D: Processing Options -> Initial Processing -> Calibration -> Export undistorted images. I used the undistorted images from Pix4Dmapper and ignored the MVE undistortion capabilities in one test, and I also created undistorted images in MVE using the Pix4Dmapper camera parameters in another test. Both results were bad. However, Pix4D maps textures perfectly onto models with the same point cloud, undistorted images, and camera parameters. The model below is a result of Pix4D. I guess that the principal point, focal length, or aspect ratio of MVE and Pix4D are computed in different ways! As I said, I had to convert the MVE translation with T = (-1)R*T.
My script for converting Pix4D cam files into MVE scenes ignores distortions and is essentially:

```
flen = K[0, 0] / max(width, height)   # K[0, 0] == K[1, 1], therefore fx == fy
ppx = K[0, 2] / width
ppy = K[1, 2] / height
t = -R * c
```
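As a sanity check, the extrinsics line above (t = -R * c) can be written out with plain Python lists so the matrix product is explicit; the rotation and camera center below are made-up values:

```python
def camera_center_to_translation(R, c):
    """Convert a camera center c (world coordinates) into the translation
    vector t = -R * c, where R is a 3x3 rotation given row-major."""
    return [-sum(R[i][j] * c[j] for j in range(3)) for i in range(3)]

# With the identity rotation, t is simply the negated camera center.
R = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
c = [2.0, -1.0, 3.0]
t = camera_center_to_translation(R, c)
print(t)  # [-2.0, 1.0, -3.0]
```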
Did you encounter views with different focal lengths for x and y?
No, fx = fy in all tests. In addition, I have used these equations, but the results are not good. I guess this error does not depend on my point cloud or camera parameters, because I executed your commands on the Der Hass dataset and got the same error, as the following figure shows! I did not change any command except adding --skip_global_seam_leveling and --skip_local_seam_leveling to texrecon in order to see how much shift is visible between atlases. If you need the exported point clouds and textures, you can download them from here. Is this error reasonable? How can I increase its precision?
I had a look at the Der Hass reconstruction and the errors are reasonable. What you see there should be flat stones and the geometry shows something very different resulting in imprecise texture mappings.
Assuming perfect camera calibration, you have to expect texture placement errors of the same magnitude as the geometric errors. In this case we have a geometric error of about a gap and a texture shift of half a gap due to a non-orthogonal view. Such shifts are only implicitly minimized through the preferred selection of orthogonal views.
If I remember correctly, we focused on the statue while capturing this dataset, and the socket has not been captured very well, mostly at sliver angles.
The geometric accuracy of the reconstruction has a high influence on the resulting texture quality.
Our group recently released a new multi-view stereo algorithm that should enhance the reconstruction quality, especially the reconstruction of textureless areas and normals:
Shading-aware Multi-view Stereo. Fabian Langguth, Kalyan Sunkavalli, Sunil Hadap, Michael Goesele. In: Proceedings of the European Conference on Computer Vision (ECCV), 2016. [Paper] [Code]
Many thanks Nils. I ran several tests to narrow down the aforementioned defect. First, I generated some point clouds with another application and created undistorted images. I used the back-projection method to verify that the points, undistorted images, and focal length are computed correctly. Then I did the following steps to detect defects:
Since the uv coordinates that I obtain through get_pixel_coords() are relative to the input image, I have to alter them a couple of times to obtain the absolute texture coordinates within a texture atlas.
Coordinate system changes:
I have changed the final texture coords computation in a118fb56 after experiencing issues.
I don't see where the shift is introduced but this requires some further investigation.
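To make the chain of coordinate system changes concrete, here is a hypothetical sketch (not the actual texrecon code) of how an image-relative pixel coordinate could travel into an atlas: shift into the patch-local frame, offset by the patch position and padding inside the atlas, then normalize by the atlas size:

```python
def to_atlas_uv(pixel, patch_min, patch_pos, padding, atlas_size):
    """Map an image-relative pixel coordinate to atlas texture coordinates.
    All names and steps are illustrative assumptions, not texrecon's code."""
    # 1. Make the coordinate relative to the patch bounding box.
    local = [p - m for p, m in zip(pixel, patch_min)]
    # 2. Place it at the patch's position inside the atlas, plus padding.
    absolute = [l + o + padding for l, o in zip(local, patch_pos)]
    # 3. Normalize by the atlas size to get uv in [0, 1].
    return [a / atlas_size for a in absolute]

uv = to_atlas_uv([120.0, 80.0], [100.0, 60.0], [512.0, 256.0], 8, 1024.0)
print(uv)  # [0.52734375, 0.27734375]
```

An off-by-one in any of the first two steps ends up as a shift of 1/atlas_size in uv, which matches the kind of sub-pixel seam misalignment discussed above.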
Thank you so much Nils. I looked at the mentioned lines of code and discovered that if I add 1 to texcoord[0] and texcoord[1] in texture_atlas.cpp#93_94, the error is removed in the dataset below. But for the other dataset below I had to add 2!
I guess the error depends on the padding and atlas size! In these two cases, I discovered the following rules:
if (this->size == 1024 and padding == 8) -> we should add 1 to texcoord
if (this->size == 4096 and padding == 32) -> we should add 2 to texcoord
I traced the process of converting XY coordinates to UV. It seems correct, but you have added some offsets, converted double values to integers in some places (such as when finding minimum/maximum values), cropped images (according to the minimum/maximum coordinate values), and added padding values. These conversions can introduce errors! Unfortunately, I still have not found the cause of the error. Can you guess which lines of code could potentially introduce this error?
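One way such a size/padding-dependent shift can arise is from how the patch bounding box is rounded. The sketch below uses made-up values (it is not the actual texrecon source) to show that two different rounding conventions for the bounding-box minimum displace every patch-local coordinate by exactly one pixel:

```python
import math

# Hypothetical patch texture coordinates in pixels (binary-exact values
# chosen so the comparison below is exact).
coords = [10.25, 17.75, 25.5]
padding = 8

# Convention A: bounding-box minimum via plain truncation (int cast).
min_a = int(min(coords))                     # 10
# Convention B: floor after subtracting half a pixel (pixel-center model).
min_b = int(math.floor(min(coords) - 0.5))   # 9

local_a = [x - min_a + padding for x in coords]
local_b = [x - min_b + padding for x in coords]

# Every coordinate disagrees by exactly one pixel between the conventions.
diffs = [a - b for a, b in zip(local_a, local_b)]
print(diffs)  # [-1.0, -1.0, -1.0]
```

If a similar convention mismatch occurs once per rounding step, and larger atlases/padding trigger more such steps, an accumulated 1-2 pixel shift like the one observed is plausible.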
Hi all, can I know how to set up this application via the command line? I have a mesh reconstructed from an external application just like you have, and I want to texture it with this application. I have no idea how to begin/proceed. I have all my JPG texture files and the camera transforms and intrinsics.
You have to create a MVE scene from the images and camera parameters. Once you have set up the files in this structure, you can texture your mesh with the command texrecon scene::undistorted mesh.ply model, assuming you called the base folder scene and put your undistorted JPGs as undistorted.jpg into the view folders.
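For reference, the expected folder layout can be sketched like this (the view folder name view_0000.mve is just an example; touch only stands in for the real files):

```shell
# Sketch of a minimal MVE scene layout for texrecon.
mkdir -p scene/views/view_0000.mve
touch scene/views/view_0000.mve/meta.ini          # camera parameters
touch scene/views/view_0000.mve/undistorted.jpg   # undistorted input image
ls scene/views/view_0000.mve
```

With one such .mve folder per image, `texrecon scene::undistorted mesh.ply model` picks up the undistorted.jpg embedding in each view.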
Hmm.. I have read through that documentation and I am having difficulty understanding how to convert to that format. So I have my files as follows now: a PLY file of the textureless mesh, the undistorted images as JPG files, and each image's transform and camera matrix stored in a text file.
I'm guessing that where I run my command from, I will need a "scene" folder which the program references? Then I have to create within that a folder called views, and conform my data to those .mve files? It's all rather confusing. I would appreciate it if you could share a sample mesh and scene folder with me, just so I can understand the structure.
Thank you
You don't have to create .mve files, just .mve folders for each of your images. In these folders you then create a meta.ini file that contains the camera info (you will have to extract the focal length etc. from the camera matrix for that) and put the undistorted jpg file next to it. You can find example datasets on the MVE project website.
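From memory of the published MVE example datasets, a meta.ini looks roughly like the following; every number below is a placeholder, so substitute your own values (focal_length, pixel_aspect, and principal_point normalized as discussed in this thread, rotation given row-major, translation as t = -R * c):

```ini
[view]
id = 0
name = IMG_0001

[camera]
focal_length = 0.92
pixel_aspect = 1
principal_point = 0.5 0.5
rotation = 1 0 0 0 1 0 0 0 1
translation = 0 0 0
```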
I wrote a couple of conversion scripts in python that can convert scene formats but didn't have the time to publish them, I hope I can do that later this week.
I've looked through your website and downloaded some of the datasets. They seem to have a .out file and a views folder; however, I do not see a .ply file.
Also, within each view I seem to have a meta.ini file which has the "focal length" as a parameter. I have fx, fy, cx, cy (4 parameters). How can these be input?
The following function determines the MVE parameters for a view from the camera matrix (K) and the image dimensions (width, height). Different focal lengths (fx, fy) are encoded in the pixel aspect ratio (self.paspect), the focal length (self.flen) is normalized with the larger image/sensor dimension, and the principal point (self.pp) is normalized with the respective image dimension.
```python
def set_intrinsics(self, K, width, height):
    fx = K[0, 0]
    fy = K[1, 1]
    # Different focal lengths are encoded in the pixel aspect ratio.
    self.paspect = fy / fx
    dim_aspect = float(width) / height  # float() guards against integer division
    img_aspect = dim_aspect * self.paspect
    # Normalize the focal length with the larger image/sensor dimension.
    if img_aspect < 1.0:
        self.flen = fy / height
    else:
        self.flen = fx / width
    # Normalize the principal point with the respective image dimension.
    ppx = K[0, 2] / width
    ppy = K[1, 2] / height
    self.pp = [ppx, ppy]
```
Yeah I get this, but what about the cx, cy values?
I don't know what you could mean with a cx and cy value other than the principal point. You'll have to give me more than just the variable names; conventions differ...
Ah my bad. I meant the principal point.
In the sample files given, the only parameter visible was the focal length value. I'm aware of how your script works, but I need to know the key-value pairs I have to put into the file to set the pp value, etc.
@soulslicer I'm wondering whether you have used mvs-texturing successfully? I have met the same situation as you.
Hi, I have reconstructed mesh models with another application. Now, when I set the camera parameters and apply textures to the models, a strange shift is visible in all models! I have used the following configuration:
1. Using lens distortions K1 and K2
2. Principal point X = 0.5 - (my principal point X / X dimension of sensor)
3. Principal point Y = 0.5 - (my principal point Y / Y dimension of sensor)
4. Aspect ratio ~ 1
5. Focal length = focal length / largest dimension of sensor
Moreover, I used tex::image_undistort_bundler and tex::image_undistort_vsfm for making undistorted images. I am sure that the camera parameters and point clouds have been created with high accuracy, but I cannot find the cause of this problem. Is there any assumption in setting the camera parameters that I have missed?