megaraptor1 opened 2 weeks ago
Can you show some cropped-in views of the subject at full resolution? To me it looks like the photos taken from the side do not have enough depth of field, and Meshroom was unable to get enough features out of the specimen itself to work out the camera positions. In the image view there is an icon with three dots that shows the features that were detected in the image. Take a look to see what Meshroom detected on the specimen.
Are you able to get closer to the specimen so it takes up half the image and use focus stacking to get enough of it in focus?
As to why you didn't get a mesh from the second attempt, I do not know. The point cloud looks decent enough. Double-click on the Meshing node, which should show the mesh in the viewer. How many triangles is it reporting?
Are you able to share the images so I can play around with them?
> Are you able to get closer to the specimen so it takes up half the image and use focus stacking to get enough of it in focus?
No, I do not think I can get any closer. I was already getting fairly close to the minimum possible focal distance and when I tried to get closer the camera would fail to take the picture.
> As to why you didn't get a mesh from the second attempt, I do not know. The point cloud looks decent enough. Double-click on the Meshing node, which should show the mesh in the viewer. How many triangles is it reporting?
When I bring the one with the decent point cloud up it says the result mesh has only 99 triangles. By contrast, the one with the distorted model produces a mesh of about 326k triangles.
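As a side note, triangle counts can also be checked outside MeshLab by counting face records in the exported OBJ. This is a minimal sketch using a stand-in file; the path to Meshroom's actual output mesh (e.g. under the Texturing cache folder) will differ:

```python
from pathlib import Path

def count_obj_faces(obj_path):
    """Count face records ('f ...') in a Wavefront OBJ file."""
    return sum(1 for line in Path(obj_path).read_text().splitlines()
               if line.startswith("f "))

# Tiny two-triangle OBJ standing in for Meshroom's output mesh.
sample = "v 0 0 0\nv 1 0 0\nv 0 1 0\nv 1 1 0\nf 1 2 3\nf 2 4 3\n"
Path("sample.obj").write_text(sample)
print(count_obj_faces("sample.obj"))  # 2
```

A count of 99 faces on a 164-photo reconstruction is a strong sign the meshing step collapsed, not just a texturing problem.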
> In the image view there is an icon with three dots that shows the features that were detected in the image. Take a look to see what Meshroom detected on the specimen.
This is the one that failed to work
This is the revised version of the first specimen that produced a near-empty model
I tried opening the other project that produced an empty model, but somehow it got overwritten and is now blank.
> Can you show some cropped-in views of the subject at full resolution?
Here are some cropped-in views of the subject at full resolution. These are the same images I showed in the views with the three-dots icon.
> Are you able to share the images so I can play around with them?
Yes. What would be the best way to send them to you?
> No, I do not think I can get any closer. I was already getting fairly close to the minimum possible focal distance and when I tried to get closer the camera would fail to take the picture.
Would focus stacking be an option?
> Yes. What would be the best way to send them to you?
Could you put them on Google Drive (or Dropbox) and share the folder?
> Would focus stacking be an option?
I am not sure. The specimen is in focus in the current set of images; I already have the depth of field turned up to maximum for each photo.
> Could you put them on Google Drive (or Dropbox) and share the folder?
Okay, I have a folder together. Where do I need to share it?
Maybe you could paint your object to avoid flare.
I'm not allowed to. My boss doesn't want to, for fear it will damage the specimen.
And what about structured lighting?
> Okay, I have a folder together. Where do I need to share it?
Just paste the link here.
Here is the link to the folder.
@FlachyJoe
I am unsure what you mean by structured lighting. Do you mean a 3D surface scanner? I tried experimenting with one, but I didn't get very good results; the type of scanner I had available didn't seem able to scan such a small specimen.
For the first dataset Meshroom produced a model for me:
I only did the SfM step for the second dataset, but it looked fine.
Can you check the log in the Meshing node? Are there any warnings in it?
What I would try to do is to reconstruct something else using the exact same settings you used for the specimens. It doesn't have to be very detailed (30 photos will do). If that doesn't work then try again with the default settings. If it works with default settings then you could try changing settings one at a time until it no longer works.
@megaraptor1 I'm thinking of a light pattern projected onto the model to increase the feature count. The light has to be fixed relative to the model, so mounted on the turntable.
@msanta
> For the first dataset Meshroom produced a model for me:
> I only did the SfM step for the second dataset, but it looked fine.
> Can you check the log in the Meshing node? Are there any warnings in it?
If that is the case I will have to try it again and see if it works. I don't think it will necessarily produce different results, but if you got something, maybe it was just random error. Did you use any alternative settings for Meshroom I need to be aware of, or did you just use the default settings?
> What I would try to do is to reconstruct something else using the exact same settings you used for the specimens. It doesn't have to be very detailed (30 photos will do). If that doesn't work then try again with the default settings. If it works with default settings then you could try changing settings one at a time until it no longer works.
I have tried doing this with photographs of some other, larger objects and those generally have worked. I can try again with something small if this next attempt at meshing doesn't go well. I'm wondering if it has something to do with how close the object is to the camera; the window showing what Meshroom detected seems to show the tooth in yellows and reds even though it was close enough for the autofocus to focus on it properly.
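For anyone following the change-one-setting-at-a-time suggestion, the sweep can be sketched generically. The parameter names and values below are placeholders for whatever was changed from defaults, not a definitive list of Meshroom settings:

```python
def one_at_a_time(defaults, changes):
    """Yield (name, config) pairs, each applying one non-default change."""
    for name, value in changes.items():
        cfg = dict(defaults)
        cfg[name] = value
        yield name, cfg

# Placeholder ImageMatching-style values; verify the real names in your version.
defaults = {"maxDescriptors": 500, "nbMatches": 50}
changes = {"maxDescriptors": 1000, "nbMatches": 100}

for name, cfg in one_at_a_time(defaults, changes):
    print(name, cfg)  # run one reconstruction per config, compare results
```

Each yielded config differs from the defaults in exactly one setting, so a failing run points directly at the responsible parameter.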
@FlachyJoe
> I'm thinking of a light pattern projected onto the model to increase the feature count. The light has to be fixed relative to the model, so mounted on the turntable.
I am still a little confused as to what you mean. Would that be something like using a light diffuser in order to prevent there from being any spots of over-illumination or flare? What kind of device would produce this structured lighting?
Some of my colleagues suggested to me that maybe I should use small dabs of colored Play-Dough or print out a version of the photogrammetry guide with unique, colored marks on the paper in order to increase the number of unique features to link images up. Is that like what you're suggesting?
I've also been wondering if printing out the photogrammetry backboard at a higher resolution, so the lines are much sharper, might help, though I am much less confident in this.
> Did you use any alternative settings for Meshroom I need to be aware of, or did you just use the default settings?
For the specimen 2 set everything was left at default. For the specimen 1 set I changed these settings in the ImageMatching node:
I don't think that actually made a difference in this case (for my projects I like to increase these values to get more matches).
By the way I am using version 2023.2 on Linux.
> I am still a little confused as to what you mean. Would that be something like using a light diffuser in order to prevent there from being any spots of over-illumination or flare? What kind of device would produce this structured lighting?
From what I understand, the idea is to project a light pattern onto the object so that the feature detector can find features on a surface that otherwise has very few.
This would be better than placing dots onto the object since you could take one photo without the light and one with the light. The photos with the light projection would be used for feature extraction, image matching, SfM, meshing. Then the normal photos would be used for texturing.
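If that two-sets-of-photos approach were tried, the bookkeeping could be sketched like this. This is pure file organization under the assumption that shots alternate pattern-on / pattern-off; wiring the two sets into separate Meshroom nodes would still need to be done in the graph:

```python
def split_projected_pairs(files):
    """Assume shots alternate: pattern-projected first, plain-light second."""
    patterned = files[0::2]  # for feature extraction, matching, SfM, meshing
    plain = files[1::2]      # for texturing
    return patterned, plain

shots = ["img_000.jpg", "img_001.jpg", "img_002.jpg", "img_003.jpg"]
patterned, plain = split_projected_pairs(shots)
print(patterned)  # ['img_000.jpg', 'img_002.jpg']
print(plain)      # ['img_001.jpg', 'img_003.jpg']
```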
In any case feature extraction is not an issue here. The images have enough features. The problem is somewhere in the meshing process. I suggest trying again with a few photos using default settings.
If you are still having issues and if you need to get these objects scanned urgently you might want to try out RealityCapture.
Okay, so I tried it again and I got the same result. I didn't change anything in the ImageMatching node just to keep things consistent.
What I did was take the first 70 or so photos from the Specimen 2 folder in Dropbox and try to make a mesh out of them.
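Picking a subset like that can be made repeatable by always taking the first N filenames in sorted order. A small sketch with stand-in files; the folder path and extension are assumptions:

```python
import tempfile
from pathlib import Path

def first_n_images(folder, n=70, pattern="*.jpg"):
    """Take the first n images by sorted filename for a small, repeatable run."""
    return sorted(Path(folder).glob(pattern))[:n]

# Demo with empty stand-in files; a real run would point at the photo folder.
tmp = Path(tempfile.mkdtemp())
for i in range(5):
    (tmp / f"img_{i:03d}.jpg").touch()
subset = first_n_images(tmp, n=3)
print([p.name for p in subset])  # ['img_000.jpg', 'img_001.jpg', 'img_002.jpg']
```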
The StructureFromMotion output looked relatively decent, though it had that issue where the image is mirrored on both sides. That is almost certainly a result of not using the entire image set; no matter, a great result is not needed here, just a replicable one.
This is what I get for display features.
I also checked the pipeline and there were no errors anywhere that caused a loss of information or a premature truncation of the entire process.
However, once again I got a non-functional model. I end up only getting this little scrap of 151 triangles.
I have no clue why this is turning out this way. It cannot be the photos, as Meshroom produced a model on your end. It also cannot be the computer going to sleep in the middle of the meshing process and disrupting Meshroom; I was on the computer the entire time and it never went into sleep mode.
Could it possibly be saving the mesh somewhere else?
I managed to replicate your issue (success!). When I first tried your dataset I had the 'Downscale' value in the DepthMap node set to 4 because my video card couldn't handle the default value of 2. However now I have a better video card so I was able to run with the default downscale and the result was that a mesh was generated with only a handful of triangles.
I also tried meshing the second dataset, and it failed to produce a mesh when the downscale value was 2 (a few triangles) or 4 (failed completely); however, it did work with a downscale value of 8. I guess I should have tried that out earlier instead of assuming it would just work.
I have no idea why a lower downscale value causes the meshing step to fail in this way. Luckily using a downscale of 4 or 8 seems to produce acceptable results (although that is for you to decide on).
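For reference, the working resolution of each depth map shrinks quickly as the Downscale value grows. A rough calculation, assuming the Canon EOS 90D's full 6960x4640 output (check your actual image dimensions):

```python
def depthmap_resolution(width, height, downscale):
    """Image size divided by the DepthMap node's Downscale value."""
    return width // downscale, height // downscale

# Downscale 2 vs 4 vs 8 on a hypothetical 6960x4640 source image:
for ds in (2, 4, 8):
    print(ds, depthmap_resolution(6960, 4640, ds))
```

So the counterintuitive result here is that the higher-resolution depth maps (downscale 2) produced the broken mesh, while the coarser ones (4 or 8) succeeded.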
I'll have to try running it again and see if it works.
I have been having trouble getting Meshroom to properly compile images into a 3D model using photogrammetry. I have been taking pictures of several specimens using a Canon EOS 90D with a macro lens (so focal length and lens information are available for all photos). These specimens are fairly small; the largest is about 1.2 cm in diameter. There are ~150 pictures taken in several rings all around the specimen at different orientation angles (see below for reconstructed cameras). The specimen remains in the same position in every image and is merely rotated on a turntable in increments of about 8-9 degrees between shots. The specimen is also on a background with unique, non-repeating symbols and imagery to make image matching easier. The pictures are all very crisp and it is possible to make out details on them very easily, so in theory it should be relatively straightforward for Meshroom to match the images.
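A quick sanity check on the shot count implied by those increments (simple arithmetic, not tied to any Meshroom setting):

```python
import math

def shots_per_ring(step_deg):
    """Turntable stops needed to cover a full 360-degree ring."""
    return math.ceil(360 / step_deg)

print(shots_per_ring(8))  # 45
print(shots_per_ring(9))  # 40
# ~150 photos at ~40-45 per ring suggests roughly 3-4 rings, matching the setup.
```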
Nevertheless, Meshroom has consistently been unable to produce models of these specimens. I have photographed them on two different occasions and tried to create models for multiple specimens, but have been unable to produce a usable model.
On the first attempt, despite the specimen being in focus and sharply defined in each photo, Meshroom simply failed to reconstruct cameras for about half of the total images.
This led to a lot of gaps in the model and a really distorted final product.
I tried taking pictures of the same specimen again from a different orientation and I did get all of the cameras (see picture below).
In practice, the StructureFromMotion model for this attempt looked better, but when I opened the resulting mesh in MeshLab it was nearly empty, containing only a few triangles (I got the mesh out of the Texturing subfolder of the MeshroomCache folder). Additionally, I get an error saying "the following textures have not been loaded: texture_1001.exr", which suggests a texture file was not output from Meshroom.
I tried this again with a second specimen and got similar results. Again, in this case all 164 cameras were accurately reconstructed (see image) and the StructureFromMotion model would suggest the model would turn out relatively okay.
But once the process finished and I opened the resulting model in MeshLab, there was nothing there but a few triangles.
I am unsure as to what is going wrong. I have been fairly diligent about doing things to improve mesh correlation and creation, but it doesn't seem to work. Notably, I've been able to get Meshroom to work with photographs of large objects from a distance and screenshots of a 3D model, but I haven't been able to get it to work with photos of these smaller specimens.
Desktop (please complete the following and other pertinent information):