cvlab-epfl / tf-lift

Tensorflow port of LIFT (ECCV 2016), with training code.

How can I get the training images and testing images #1

Closed: amusi closed this issue 7 years ago

amusi commented 7 years ago

I tried to use pip to set up the project and ran into the error "PermissionError: [Errno 13] Permission denied: '/cvlabdata2'". Then I opened config.py and found the reason: "/cvlabdata2/home/{}/Datasets/" is the default path. My requests are as follows:

  1. Can you upload some datasets, including training images and testing images?
  2. Write a requirements.txt for users to use with pip.

Thank you for your help

kmyi commented 7 years ago

Hello,

Can you upload some datasets, including training images and testing images?

For the images, we used publicly available datasets, so you should be able to get them from the respective dataset websites. The SfM models are simply too big for us to host. You can get them using Visual SfM, which is the software we use.

We will probably provide more details on how to create the dataset later, but in short, the files that the current release looks for are as follows:

  1. a list of images for each split (eccv.py:365)
  2. a keypoint file for each jpg (hdf5), which holds two fields: valid_keypoints and other_keypoints (eccv.py:117); see the sketch after this list
  3. a histogram of all keypoint scales in all images (eccv.py:354)

You would of course set the paths properly according to your environment as well.
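
For illustration, here is a minimal sketch of what items 2 and 3 could look like when written with h5py. Only the two field names and the example file name come from this thread; the array shapes, the scale column index, and the bin count are assumptions.

```python
import glob

import h5py
import numpy as np

# Item 2: one keypoint file per jpg with the two fields named above.
# Only the field names (eccv.py:117) and the example file name come from
# this thread; the array shapes and column layout are assumptions.
valid_keypoints = np.zeros((500, 4), dtype=np.float32)   # e.g. x, y, scale, orientation
other_keypoints = np.zeros((1500, 4), dtype=np.float32)

with h5py.File("98263187_ff51c42285_o-kp-minsc-2.0.h5", "w") as f:
    f.create_dataset("valid_keypoints", data=valid_keypoints)
    f.create_dataset("other_keypoints", data=other_keypoints)

# Item 3: a histogram over the scales of all keypoints in all images.
# The scale column index (2) and the bin count are assumptions.
all_scales = []
for kp_file in glob.glob("*-kp-minsc-2.0.h5"):
    with h5py.File(kp_file, "r") as f:
        all_scales.append(f["valid_keypoints"][:, 2])
        all_scales.append(f["other_keypoints"][:, 2])
hist, bin_edges = np.histogram(np.concatenate(all_scales), bins=100)
```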

  1. Write a requirements.txt for users to use with pip.

We will do that when we can, but the ones in the README should be good enough.
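
In the meantime, a hypothetical starting point could look like the following; the package list is an assumption about typical dependencies for this kind of project, not a copy of the README:

```
# Hypothetical requirements.txt; pin versions to whatever the README asks for.
tensorflow-gpu
numpy
h5py
opencv-python
```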

Cheers, Kwang

arminpcm commented 7 years ago

Hi,

Thanks for sharing your great work. Can you possibly provide the trained model for TensorFlow, along with other things that one might need to run the code (probably the scale and std for normalization, etc.)?

Thanks, Armin

etrulls commented 7 years ago

Hi,

We'll share models at some point, but we're still re-training and trying out different configurations.

Best, E.


SergazyK commented 6 years ago

One can convert the Theano pre-trained model from the original repository (https://github.com/cvlab-epfl/LIFT) to a TensorFlow model.
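
A minimal sketch of that general idea (not the authors' conversion script): load the pickled Theano parameters and copy them into the matching TensorFlow variables. The file name, pickle layout, and variable-name mapping below are hypothetical placeholders.

```python
import pickle

import numpy as np
import tensorflow as tf  # written against TF 1.x, which tf-lift targets

with open("lift_theano_weights.pkl", "rb") as f:
    theano_params = pickle.load(f)  # assumed to be a {name: ndarray} dict

# Hypothetical mapping from tf-lift variable names to Theano parameter names.
name_map = {
    "network/conv1/weights": "desc_conv1_W",
    "network/conv1/biases": "desc_conv1_b",
}

# Assumes the tf-lift graph has already been built in the default graph.
assign_ops = []
for var in tf.global_variables():
    theano_name = name_map.get(var.op.name)
    if theano_name is None:
        continue
    value = np.asarray(theano_params[theano_name])
    # Depending on how the Theano filters were stored, convolution kernels
    # may need transposing to TensorFlow's [H, W, in, out] layout.
    assign_ops.append(var.assign(value))

with tf.Session() as sess:
    sess.run(assign_ops)
    # From here one could save a TF checkpoint with tf.train.Saver().
```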

punisher220 commented 4 years ago

Excuse me, I have a question. I want to run the training pipeline on the Piccadilly dataset as the paper did. I downloaded the raw Piccadilly images and found 7351 images in total. I tried the Visual SFM software on a small image set and noticed that VSFM computes SIFT keypoints and matches them for every available image pair, creating a .mat file and a .sift file for each image.

In other words, with N images in the dataset, Visual SFM performs N(N-1)/2 matchings if you run the whole process at once. Running the matching pipeline over Piccadilly in one go would take a very long time to generate the .mat and .sift files for every image.

I wonder whether you divided Piccadilly into several groups to run Visual SFM, or whether you finished all of the matching in one go and simply waited a long time. If you split Piccadilly into groups, how many groups did you use?
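
For a rough sense of the workload, with all 7351 images exhaustive matching means:

```python
# Back-of-the-envelope count of exhaustive pairwise matching over Piccadilly.
n_images = 7351
n_pairs = n_images * (n_images - 1) // 2  # N(N-1)/2
print(n_pairs)  # 27,014,925 image pairs
```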

kmyi commented 4 years ago

Please do not necro closed threads. From what I recall, we waited a very long time to create the dataset. @etrulls ?

punisher220 commented 4 years ago

OK, so you finished the Piccadilly image matching in Visual SFM in one go. Another question: in the LIFT paper you mention there are 3384 images in Piccadilly, but I downloaded Piccadilly from http://www.cs.cornell.edu/projects/1dsfm/ and found 7351 images in total. Some images only display the string "This photo is currently unavailable." In addition, some images only show people's faces, which seem unreasonable for the training dataset. So I guess you selected the images showing the large attractions and picked 3384 of them. My question is: how did you define the selection? Or could you please share the 3384-image version of Piccadilly used for training in your paper?

Thank you.

etrulls commented 4 years ago

Matching used default values IIRC, which I imagine means exhaustive.

We did not subsample. We probably used all of them and those are the ones VSFM registered.

punisher220 commented 4 years ago

Thank you for your reply. I have another question: how do you read the data in the .mat and .sift files in your example folder? They are not readable with h5py or as plain text.

etrulls commented 4 years ago

You don't have to; VisualSFM produces those, and we don't use them directly.

punisher220 commented 4 years ago

Excuse me, so how do you use them (produced by Visual SFM) to create the h5 files (especially 98263187_ff51c42285_o-kp-minsc-2.0.h5 in your example folder)? Or what did you use to create the h5 files?

punisher220 commented 4 years ago

Excuse me, @etrulls, I need to check the nv.ini file from your Visual SFM software folder (from when you generated the .mat files, .sift files, and .nvm file from the Piccadilly dataset) to confirm the running configuration.

I did the matching and reconstruction in Visual SFM with default settings, but I failed to get the 3D reconstruction shown at http://www.cs.cornell.edu/projects/1dsfm/. My reconstructed point cloud is obviously wrong (I only got a few chaotic points).

I want to confirm whether you changed some parameters in Visual SFM to get the proper reconstruction from Piccadilly. If it is convenient, could you please share the nv.ini file from your Piccadilly Visual SFM pipeline in the tf-lift repo? Thank you.

etrulls commented 4 years ago

I need to check the nv.ini file from your Visual SFM software folder (from when you generated the .mat files, .sift files, and .nvm file from the Piccadilly dataset) to confirm the running configuration.

Sorry, had a look but I don't have such a file. Either it wasn't being generated back then or it's been deleted since.

I did the matching and reconstruction in Visual SFM with default settings, but I failed to get the 3D reconstruction shown at http://www.cs.cornell.edu/projects/1dsfm/. My reconstructed point cloud is obviously wrong (I only got a few chaotic points).

I want to confirm whether you have changed some parameters in Visual SFM to get the proper reconstruction from Piccadilly.

Pretty sure we were using default settings, or very close to it.

punisher220 commented 4 years ago

Thank you for your reply. @etrulls When you first run VisualSFM.exe on a Windows machine, a file named nv.ini is created automatically in the program folder.

I changed nv.ini to speed up the matching, because I have to admit the matching for the 7351 Piccadilly images took me a long time. I only changed _param_gpu_matchfmax from 8192 to 4096 in the nv.ini file.

etrulls commented 4 years ago

We did this on Linux. I seem to recall seeing an ini file, but I'm not sure; it's been a while.

It took a long time on our end as well, but we had powerful clusters (and time).

punisher220 commented 4 years ago

Clusters? Do you mean you ran the long matching job with multiple threads, or something else?

etrulls commented 4 years ago

I don't remember, but again, we were using a default set-up. I doubt it was well parallelized.