Closed xmof closed 5 years ago
OK, problem semi-solved: I converted the TIFFs to JPEG with ImageMagick, which didn't help much; makescene still said "Skipping file, cannot load image". Converting to PNG instead, just with "convert image.tif image.png", seems to work: it now says "Importing image: ..., writing MVE view ...", etc.
I don't really understand why JPEG and TIFF don't work out fine (maybe I will try the extensions .jpeg and .tiff instead of .jpg and .tif). On the other hand, converting the images to PNG takes only a little time anyway.
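The single-file `convert` call above can be batched over a whole frame directory. A minimal sketch, assuming ImageMagick's `convert` is on the PATH; the frame names are illustrative, and the commands are echoed as a dry run (drop the `echo` to actually convert):

```shell
# Dry-run sketch: build one ImageMagick convert command per .tif frame.
# Remove `echo` to perform the conversion (assumes ImageMagick is installed).
for f in frame_001.tif frame_002.tif; do
  echo convert "$f" "${f%.tif}.png"
done
```

The `${f%.tif}.png` expansion strips the `.tif` suffix and appends `.png`, so the output file sits next to the input with the same base name.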
To add a global include path, use `Makefile.inc` and append it to the `COMPILE.cc` variable. Note, however, that it should not be necessary to add the include path for, e.g., libpng. Your `/usr/include` should contain `png.h`, `jpeglib.h`, etc., which is the default include path, provided that you installed `libpng-dev`, `libjpeg-dev`, or the correspondingly named packages on your system.
Why the JPEG and TIFF images don't work, however, is unclear. MVE provides support (also in `makescene`) for TIFF, PNG and JPEG. In fact, converting to PNG might have adverse effects, as the EXIF tags (in case you want to run SfM) get lost, and focal length information may not be available.
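Concretely, a `Makefile.inc` along these lines would add the extra include directories globally; this is a sketch, with the paths taken from the layout reported in this thread (adjust to your system):

```makefile
# Makefile.inc -- append extra include directories to the C++ compile command.
# The paths below match the layout described in this thread; adjust as needed.
COMPILE.cc += -I/usr/include/libpng -I/usr/include/libjpeg
```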
Hi Simon, thank you very much. I will look more into the details of the include directory and search paths.
For the moment, with the PNGs, the pipeline from the tutorial on the MVE webpage seems to run. Perhaps the quality is lower than what it could be if the correct EXIF data were available, but for now I am only testing whether it runs, and it seems to. After the weekend I will look into the particular options in more detail.
The dataset I use for the moment is an iPhone .mov file of a chair in the company restaurant, converted to TIFF with ffmpeg and further to PNG. I still have to check with exiftool; probably there is no good EXIF data available.
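The frame-extraction step described above can be done in a single ffmpeg call; a dry-run sketch with illustrative file names and frame rate (extracting directly to PNG would also skip the intermediate TIFF step; remove the `echo` to run it):

```shell
# Dry-run sketch: extract 2 frames per second from an iPhone .mov to PNG.
# Input name and fps value are illustrative; remove `echo` to run ffmpeg.
echo ffmpeg -i chair.mov -vf fps=2 frame_%04d.png
```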
Cheerio, Hans
Better a bottle in front of me than a frontal lobotomy!
Hello Mr. Fuhrmann,
Regarding my previous question about TIFF images not being recognized: everything works fine now. With ffmpeg I converted a movie to non-packed-bits images, and they are all read in. I also added the aperture, focal length, and 35 mm equivalent focal length back into the images with exiftool.
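The tag write-back mentioned above can be scripted; a dry-run sketch, with illustrative focal-length values (`FocalLength` and `FocalLengthIn35mmFormat` are standard exiftool tag names; remove the `echo` to write for real):

```shell
# Dry-run sketch: write focal-length EXIF tags back into extracted frames.
# 4.25 mm / 28 mm-equivalent are illustrative values; adjust for your camera.
for f in frame_0001.png frame_0002.png; do
  echo exiftool "-FocalLength=4.25" "-FocalLengthIn35mmFormat=28" "$f"
done
```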
The question that I have now is if and how to submit an MVE job to a cluster, MPI based. Is there intrinsic support, or can I at least split the job into multiple jobs, e.g. by submitting MVE to different nodes with different image directories and merging the results together one way or another? I couldn't find much about it on your websites at GitHub and TU Darmstadt; however, it seems to me that several parts of the match finding and further reconstruction could be split into sub-processes.
If I submit with e.g. `mpirun -f machinefile -n 18` to 18 processors on 5 nodes, only one node is activated. I think this is because that node is the first one used by makescene; the others then produce a warning that the scene directories already exist, and wait for Ctrl-C or Return. My scenes directory is shared over NFS, for your information.
Submitting to different nodes with different parts of the image set and merging the results later on could be a solution for that.
If you have a hint for me, I would be very grateful.
About my reason for starting reconstructions: I need to record the outer dimensions of ships' hulls, in particular the underwater parts. Normally this is a lot of work, and expensive, with laser scanners or measurements by hand. My idea: when a vessel is dry-docked, fly around it with a drone, reconstruct the hull, and that should be it.
Much less work, much less expensive.
Probably also much quicker in the end.
I hope for your answer,
Regards, Dr. J.A. Piest Nijmegen The Netherlands
There is no support for parallelizing the SfM part of the pipeline. In theory, feature detection and matching could be split up across multiple machines and the resulting matches merged into a single file, but no work has been done on that. The depth map reconstruction, however, can trivially be parallelized: each `dmrecon` command can be run on a separate image, or a range of images, with the options `-m` or `-l`, respectively.
If you're willing to invest a significant amount of time to implement a variation of `sfmrecon` that runs on multiple machines, I can give you some pointers. For the time being, I recommend you look into the `--video-matching` option, which only matches to previous frames, but this is obviously a big limitation as it doesn't detect loops and such.
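The per-view parallelism described here can be driven from the shell; a dry-run sketch that prints one `dmrecon` invocation per view, assuming (as the reply suggests) that `-m` takes the view ID. The view count and scene path are illustrative; pipe the output to `sh`, or hand the lines to a cluster scheduler, to execute:

```shell
# Dry-run sketch: generate one dmrecon job per master view.
# 18 views and ./scene are illustrative; pipe to `sh` to actually run.
SCENE=./scene
for i in $(seq 0 17); do
  echo "apps/dmrecon/dmrecon -m$i $SCENE"
done
```

Because each job reconstructs an independent depth map, the printed commands can run on different nodes of the shared-NFS cluster without coordinating with each other.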
Hi Simon, thanks for your response. The issue with makescene raising an error on already-existing directories I bypassed by running makescene on one node before calling the rest of the pipeline. I looked into makescene.cc; it doesn't look difficult to hack out the warnings ...
You say sfmrecon doesn't run in parallel. I am running now: `mpirun -f machinefile -n 18 Construct.sh`, where Construct.sh is the rest of the pipeline after makescene.
It started up 18 instances of sfmrecon; I will wait for the result. Maybe it does the same run 18 times if there is no parallelization. I have already noticed that `mpstat -P ALL` on each node doesn't show 100% busy cores per node, whereas the top command does give CPU = 100% for each instance of sfmrecon ...
That's it for now.
I will try the video-matching option and see if things come out well enough.
As for the final purpose, creating virtual ships' hulls: these are normally pretty smooth, at least in the relevant underwater parts, and don't need extremely big point clouds or hyper-fine meshes.
Regards, Hans
I really don't understand why you would run `makescene` or `sfmrecon` multiple times. Just calling it more often doesn't make it faster, but slower, because you're running the same processing multiple times. I'll close this for now. Feel free to reopen.
Hi there, I recently installed MVE for some 3D reconstruction projects. Compilation from source required some manually added include directories (libpng, libtiff, libjpeg), so my question could be due to that. Issuing `make` gave errors, e.g. png.h not found, although I had the headers in my /usr/include/libpng directory. Creating a link in /usr/include made the errors go away; before that, I compiled the failing lines separately with extra include directories, e.g. `g++ ...... -I/usr/include/libpng ....`, because I didn't know how to change the makefile to do that for me. Now I am trying to make scenes with makescene (I am still testing; I issue `apps/makescene/makescene -i`). It creates the SCENE directory and reads from the IMAGES directory, but all it says is: "Skipping file, cannot load image ..."
1) Could this be due to compilation problems? 2) If so, how do I make the compiler find the right include directories, which are basically subdirectories of my /usr/include directory? 3) And if not, what else could be the problem?
All image files have my user/group permissions, and so do the directories. I hope for your answer soon!
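The two workarounds described in this question can be sketched as follows; a dry run with the paths taken from the report above and a hypothetical source file name (remove the `echo` prefixes to execute):

```shell
# Dry-run sketch: two ways to make the compiler find png.h when it lives
# in /usr/include/libpng (paths as reported above; somefile.cc is hypothetical).
# 1) Pass the directories explicitly when compiling a single file:
echo g++ -I/usr/include/libpng -I/usr/include/libjpeg -c somefile.cc
# 2) Or symlink the header into the default search path:
echo sudo ln -s /usr/include/libpng/png.h /usr/include/png.h
```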