cvlab-epfl / LIFT

Code release for the ECCV 2016 paper

Problems arise when running ./run.sh #16

Closed hanghoo closed 7 years ago

hanghoo commented 7 years ago

Hello. There are some problems when I run ./run.sh, and I can't find a way to solve them.

Traceback (most recent call last):
  File "compute_detector.py", line 47, in <module>
    from Utils.sift_tools import recomputeOrientation
  File "/home/hoo104/LIFT/python-code/Utils/sift_tools.py", line 48, in <module>
    libSIFT = cdll.LoadLibrary('../c-code/libSIFT.so')
  File "/home/hoo104/anaconda3/lib/python3.5/ctypes/__init__.py", line 425, in LoadLibrary
    return self._dlltype(name)
  File "/home/hoo104/anaconda3/lib/python3.5/ctypes/__init__.py", line 347, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: ../c-code/libSIFT.so: undefined symbol: _ZN2cv3hal9fastAtan2EPKfS2_Pfib

Parameters

--------------------------------------------------------------------------------------

Be careful as we do not use sophisticated parsing

The parser will read until the semicolon

types are defined as below

ss: multiple strings separated with commas

s: string

b: boolean

f: float

d: int

--------------------------------------------------------------------------------------

--------------------------------------------------------------------------------------

Dataset parameters

ss: dataset.trainSetList = ECCV/piccadilly/; # All the first images of oxford dataset is used for training
s: dataset.dataType = ECCV; # the dataset type

--------------------------------------------------------------------------------------

Model parameters

s: model.modelType = Combined; # the network type
b: model.bNormalizeInput = 1; # Normalize input to have zero mean and 1 std
f: model.fScaleList = np.array([1.0]); # list of scale spaces (small means larger scale)

GHH related

f: model.max_strength = -1; # GHH softmax strength (-1 for hard)

Keypoints

s: model.sDetector = tilde; # network architecture for kp
d: model.nFilterSize = 25; # Let's keep the number odd
d: model.nPatchSizeKp = 48; # patch size for kp; including moving regions we use something smaller
s: model.sKpNonlinearity = None; # use nonlinearity at end
f: model.fScaleList = np.array([1.0]); # list of scale spaces (small means larger scale)
f: model.bias_rnd = 0.0; # random noise added to bias
f: model.epsilon = 1e-8; # epsilon for tilde

Orientation

s: model.sOrientation = cvpr16; # network architecture for orientation

Descriptor

s: model.sDescriptor = descriptor_imported; # network architecture for desc
d: model.nDescInputSize = 64; # Input size to be fed to the descriptor
s: model.descriptor_export_folder = /cvlabdata1/home/trulls-data/kpdesc/torch/export/;
L-> skipped
s: model.descriptor_model = new-CNN3-picc-iter-56k.h5; # network configuration
s: model.descriptor_input = desc-input; # Descriptor input

--------------------------------------------------------------------------------------

Parameters for patch extraction

automatically determined

f: patch.fMaxScale = np.max(self.model.fScaleList); # asserts make sure this is stored properly
f: patch.fRatioScale = (self.model.nPatchSizeKp / 2.0) / 2.0; # to not resize when scale is 2
d: patch.nPatchSize = np.round(self.model.nDescInputSize * self.patch.fRatioScale / 6.0); # the large patch size for data; the desc will look at ratio scale of 6.0

--------------------------------------------------------------------------------------

Validation and test time parameters

d: validation.batch_size = 100; # batch size of the implementation

d: validation.nScaleInterval = 4;

L-> skipped

d: validation.nNMSInterval = 2; # number of intervals we look for NMS (3 would mean it goes up one octave in case of nScaleInterval=2)

L-> skipped

---------------------------------------------------------------------------------


Test Data Module


Traceback (most recent call last):
  File "compute_orientation.py", line 107, in <module>
    test_data_in = data_module.data_obj(param, image_file_name, kp_file_name)
  File "/home/hoo104/LIFT/python-code/Utils/dataset_tools/test.py", line 64, in __init__
    self.load_data(param, image_file_name, kp_file_name)
  File "/home/hoo104/LIFT/python-code/Utils/dataset_tools/test.py", line 82, in load_data
    pathconf, param, image_file_name, kp_file_name)
  File "/home/hoo104/LIFT/python-code/Utils/dataset_tools/test.py", line 129, in load_data_for_set
    kp = np.asarray(loadKpListFromTxt(kp_file_name))
  File "/home/hoo104/LIFT/python-code/Utils/kp_tools.py", line 196, in loadKpListFromTxt
    kp_file = open(kp_file_name, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: '/home/hoo104/LIFT/results/img1_kp.txt'



Test Data Module


Traceback (most recent call last):
  File "compute_descriptor.py", line 111, in <module>
    test_data_in = data_module.data_obj(param, image_file_name, kp_file_name)
  File "/home/hoo104/LIFT/python-code/Utils/dataset_tools/test.py", line 64, in __init__
    self.load_data(param, image_file_name, kp_file_name)
  File "/home/hoo104/LIFT/python-code/Utils/dataset_tools/test.py", line 82, in load_data
    pathconf, param, image_file_name, kp_file_name)
  File "/home/hoo104/LIFT/python-code/Utils/dataset_tools/test.py", line 135, in load_data_for_set
    angle = np.pi / 180.0 * kp[:, IDX_ANGLE]  # store angle in radians
IndexError: too many indices for array

kmyi commented 7 years ago

Hi,

It's due to compilation and library problems in your environment, so I cannot help you in this case. The error message shows that you have libstdc++ issues.

Cheers, Kwang

1. 03:58, Hoototo notifications@github.com wrote:

Hello, there are some problems when I run ./run.sh

hoo104@amax-1080:~/LIFT$ ./run.sh
Traceback (most recent call last):
  File "compute_detector.py", line 47, in <module>
    from Utils.sift_tools import recomputeOrientation
  File "/home/hoo104/LIFT/python-code/Utils/sift_tools.py", line 48, in <module>
    libSIFT = cdll.LoadLibrary('../c-code/libSIFT.so')
  File "/home/hoo104/anaconda3/lib/python3.5/ctypes/__init__.py", line 425, in LoadLibrary
    return self._dlltype(name)
  File "/home/hoo104/anaconda3/lib/python3.5/ctypes/__init__.py", line 347, in __init__
    self._handle = _dlopen(self._name, mode)
OSError: /home/hoo104/anaconda3/lib/python3.5/site-packages/../../libstdc++.so.6: version `CXXABI_1.3.8' not found (required by ../c-code/libSIFT.so)


hanghoo commented 7 years ago

@kmyid Thank you very much. After downloading the fresh files and compiling again, that problem disappeared. But there are some other questions, like:

1. TypeError: list indices must be integers or slices, not numpy.float64
2. FileNotFoundError: [Errno 2] No such file or directory: '/home/hoo104/LIFT/results/img1_kp.txt'
3. FileNotFoundError: [Errno 2] No such file or directory: '/home/hoo104/LIFT/results/img1_ori.txt'

The first problem seems to be a Python 3.x matter, but I am not sure. Do the second and third problems indicate that I need to create two files named img1_kp.txt and img1_ori.txt? I would appreciate it if you could give me some hints on how to continue.
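For reference, run.sh runs the three stages in sequence (compute_detector, compute_orientation, compute_descriptor), and img1_kp.txt and img1_ori.txt are outputs of the first two stages rather than files to create by hand; a FileNotFoundError means an earlier stage failed before writing its output. A minimal sketch (hypothetical helper; file names taken from the tracebacks in this thread) of locating the first failed stage:

```python
import os

# run.sh runs these stages in order; each stage consumes the previous one's output
# (file names follow the paths shown in the tracebacks above).
stages = [
    ("compute_detector", "results/img1_kp.txt"),
    ("compute_orientation", "results/img1_ori.txt"),
    ("compute_descriptor", "results/img1_desc.h5"),
]

def first_failed_stage(root, stages):
    """Return the first stage whose output file is missing, or None."""
    for name, out_file in stages:
        if not os.path.exists(os.path.join(root, out_file)):
            return name
    return None
```

Under that reading, the TypeError in compute_detector is the root cause and the two missing-file errors are downstream symptoms.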

hoo104@amax-1080:~/LIFT$ ./run.sh


Time taken to read and prepare the image is 54.827 ms
INFO: Testing double scale
resize to test is [ 1.95833333 1.70482819 1.48413914 1.29201816 1.12476714 0.97916667 0.85241409 0.74206957 0.64600908 0.56238357 0.48958333 0.42620705 0.37103478 0.32300454 0.28119178 0.24479167 0.21310352 0.18551739 0.16150227 0.14059589 0.12239583 ]
scales to test is [ 1. 1.14869835 1.31950791 1.51571657 1.74110113 2. 2.29739671 2.63901582 3.03143313 3.48220225 4. 4.59479342 5.27803164 6.06286627 6.96440451 8. 9.18958684 10.55606329 12.12573253 13.92880901 16. ]
Time taken to resize image is 16.235 ms
WARNING (theano.sandbox.cuda): The cuda backend is deprecated and will be removed in the next release (v0.10). Please switch to the gpuarray backend. You can get more information about how to switch at this URL: https://github.com/Theano/Theano/wiki/Converting-to-the-new-gpu-back-end%28gpuarray%29

/home/hoo104/anaconda3/lib/python3.5/site-packages/theano/sandbox/cuda/__init__.py:556: UserWarning: Theano flag device=gpu (old gpu back-end) only support floatX=float32. You have floatX=float64. Use the new gpu back-end with device=cuda for that value of floatX.
  warnings.warn(msg)
Using gpu device 0: GeForce GTX 1080 (CNMeM is disabled, cuDNN 5105)
Traceback (most recent call last):
  File "compute_detector.py", line 215, in <module>
    image, verbose=False)
  File "/home/hoo104/LIFT/python-code/Utils/solvers.py", line 148, in TestImage
    myNet = CreateNetwork4Image(pathconf, param, image, verbose=verbose)
  File "/home/hoo104/LIFT/python-code/Utils/solvers.py", line 130, in CreateNetwork4Image
    verbose=verbose)
  File "/home/hoo104/LIFT/python-code/Utils/networks/eccv_combined.py", line 78, in __init__
    verbose=verbose)
  File "/home/hoo104/LIFT/python-code/Utils/networks/eccv_base.py", line 121, in __init__
    self.buildLayers(verbose=verbose, **kwargs)
  File "/home/hoo104/LIFT/python-code/Utils/networks/eccv_combined.py", line 105, in buildLayers
    not bTestWholeImage), verbose=verbose)
  File "/home/hoo104/LIFT/python-code/Utils/networks/eccv_combined.py", line 158, in buildLayersKp
    detector_module.build(self, idxSiam, verbose)
  File "/home/hoo104/LIFT/python-code/Utils/networks/detector/tilde.py", line 212, in build
    eps=myNet.config.epsilon)
  File "/home/hoo104/LIFT/python-code/Utils/lasagne_tools.py", line 97, in createXYZMapLayer
    scale_space_min = fScaleList[new_min_idx]
TypeError: list indices must be integers or slices, not numpy.float64



Test Data Module


Traceback (most recent call last):
  File "compute_orientation.py", line 107, in <module>
    test_data_in = data_module.data_obj(param, image_file_name, kp_file_name)
  File "/home/hoo104/LIFT/python-code/Utils/dataset_tools/test.py", line 64, in __init__
    self.load_data(param, image_file_name, kp_file_name)
  File "/home/hoo104/LIFT/python-code/Utils/dataset_tools/test.py", line 82, in load_data
    pathconf, param, image_file_name, kp_file_name)
  File "/home/hoo104/LIFT/python-code/Utils/dataset_tools/test.py", line 129, in load_data_for_set
    kp = np.asarray(loadKpListFromTxt(kp_file_name))
  File "/home/hoo104/LIFT/python-code/Utils/kp_tools.py", line 196, in loadKpListFromTxt
    kp_file = open(kp_file_name, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: '/home/hoo104/LIFT/results/img1_kp.txt'



Test Data Module


Traceback (most recent call last):
  File "compute_descriptor.py", line 111, in <module>
    test_data_in = data_module.data_obj(param, image_file_name, kp_file_name)
  File "/home/hoo104/LIFT/python-code/Utils/dataset_tools/test.py", line 64, in __init__
    self.load_data(param, image_file_name, kp_file_name)
  File "/home/hoo104/LIFT/python-code/Utils/dataset_tools/test.py", line 82, in load_data
    pathconf, param, image_file_name, kp_file_name)
  File "/home/hoo104/LIFT/python-code/Utils/dataset_tools/test.py", line 129, in load_data_for_set
    kp = np.asarray(loadKpListFromTxt(kp_file_name))
  File "/home/hoo104/LIFT/python-code/Utils/kp_tools.py", line 196, in loadKpListFromTxt
    kp_file = open(kp_file_name, 'rb')
FileNotFoundError: [Errno 2] No such file or directory: '/home/hoo104/LIFT/results/img1_ori.txt'
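The TypeError in the log above comes from indexing a Python list with a numpy.float64. Under Python 3, / is true division, so index arithmetic that produced an int under Python 2 can produce a float instead, which is why the thread's eventual fix of running under Python 2 works. A minimal reproduction, with the explicit int() cast that would also avoid it (a sketch, not the repository's actual code):

```python
import numpy as np

fScaleList = [1.0]            # as in the parameter dump above
new_min_idx = np.floor(0.0)   # numpy computations often yield np.float64 scalars

try:
    fScaleList[new_min_idx]   # Python 3 rejects float list indices
except TypeError as e:
    print(e)                  # list indices must be integers or slices, not numpy.float64

scale_space_min = fScaleList[int(new_min_idx)]  # explicit cast works on both 2 and 3
```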

hanghoo commented 7 years ago

@yiran-THU @djidan10 @navia1991 Hello, perhaps you can help me solve this problem. Thank you very much.

hanghoo commented 7 years ago

I have solved the problems. Refer to: https://github.com/cvlab-epfl/LIFT/issues/4. The software needs to run under Python 2, so I installed Anaconda2, and that solved it.

hanghoo commented 7 years ago

As a result, I get img1_kp.txt, img1_ori.txt and img1_desc.h5. How can I obtain matches like the ones below?

kmyi commented 7 years ago

Hi,

You would need to use some matching scheme with the keypoints and descriptors. Please see the benchmark_orientations repo for an example of how to use them.
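As a rough illustration of such a matching scheme: brute-force nearest-neighbour search over descriptors with Lowe's ratio test. This is a plain numpy sketch with hypothetical inputs, not the benchmark repository's actual pipeline:

```python
import numpy as np

def match_descriptors(desc1, desc2, ratio=0.8):
    """Return (i, j) pairs: for each desc1[i], its nearest desc2[j],
    kept only if it beats the second-nearest match by the ratio test."""
    # pairwise squared L2 distances, shape (n1, n2)
    d = ((desc1[:, None, :] - desc2[None, :, :]) ** 2).sum(-1)
    matches = []
    for i, row in enumerate(d):
        j1, j2 = np.argsort(row)[:2]
        # squared distances, so compare against ratio**2
        if row[j1] < ratio ** 2 * row[j2]:
            matches.append((i, j1))
    return matches
```

Feeding it the descriptor arrays read from two images' .h5 files would give index pairs into the corresponding keypoint lists.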

hanghoo commented 7 years ago

Thanks for your reply. @kmyid

YuYuYan commented 7 years ago

@Hoototo Have you tried to use benchmark_orientations to do the matching?

YuYuYan commented 7 years ago

@kmyid I get img1_kp.txt, img1_ori.txt and img1_desc.h5, but I do not know the direct relationship between these files, and the img_kp_txt_scores.h5 file shows -Infinity. Thanks.

hanghoo commented 7 years ago

Hi, @YuYuYan. I want to use benchmark_orientations to do the matching, but I do not have enough time for this work now.

YuYuYan commented 7 years ago

@Hoototo Thanks for your reply. I also want to do the matching, but I do not know whether I will succeed.

hanghoo commented 7 years ago

@YuYuYan I think you can do it. kmyid said you can see the benchmark_orientations repo for an example of how to use it: https://github.com/cvlab-epfl/learn-orientation https://github.com/cvlab-epfl/benchmark-orientation

YuYuYan commented 7 years ago

OK, I will try to do it now. Good luck to me, and thanks again.

NEU-Gou commented 7 years ago

@YuYuYan @Hoototo Just a quick search on Google. All you need is to read the .h5 file with hdf5read() in MATLAB (I believe there are packages for Python as well):

keyPt = hdf5read('img1_desc.h5', '/keypoints');
desc = hdf5read('img1_desc.h5', '/descriptors');

The first two rows in keyPt are the x and y coordinates.

Hope this is helpful.
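A rough Python equivalent of the MATLAB snippet above, using h5py with the same dataset names. The toy file written here only stands in for a real img1_desc.h5, and note one assumption worth verifying: h5py's row-major view is the transpose of what MATLAB's hdf5read shows, so MATLAB's first two rows should become the first two columns here:

```python
import h5py
import numpy as np

# stand-in for the real img1_desc.h5 (hypothetical toy data, same dataset names)
with h5py.File("img1_desc_demo.h5", "w") as f:
    f["keypoints"] = np.array([[10.0, 20.0, 2.0, 0.5],
                               [30.0, 40.0, 1.0, 1.5]])
    f["descriptors"] = np.zeros((2, 128), dtype=np.float32)

# read it back, one row per keypoint
with h5py.File("img1_desc_demo.h5", "r") as f:
    keypoints = f["keypoints"][...]
    descriptors = f["descriptors"][...]

# assuming the transposed layout, x and y are the first two columns
x, y = keypoints[:, 0], keypoints[:, 1]
```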

hanghoo commented 7 years ago

@NEU-Gou Thank you very much.

YuYuYan commented 7 years ago

@Hoototo @kmyid @NEU-Gou Hi, I need your help now. I am trying to run benchmark_orientations: when I run run_evaluate.m there is an error at "from parse import parse" with "ImportError: No module named parse". My Python is 2.7.13, and when I try "import parse" in Python directly it works, but the error remains. Thanks!

YuYuYan commented 7 years ago

"If we have an error from python execution, retry five times and spit error as the error might just be from theano compile issues..." That is the error I get.

kmyi commented 7 years ago

Hello, please check the requirements section. You are missing the parse module for Python.

YuYuYan commented 7 years ago

But I have imported parse in Python, and it succeeded.

kmyi commented 7 years ago

Probably you have some environment issues with virtual environments and such, then... I cannot help you more with that, as it is related to your setup.

YuYuYan commented 7 years ago

OK, I will try it again. Thank you for the reply. You are great! I admire you!

YuYuYan commented 7 years ago

@NEU-Gou Thanks for your reply. Now I am using: keyPt = hdf5read('img1_desc.h5', '/keypoints'); and the first two rows in keyPt are the x and y coordinates. But what do the other rows mean?

hanghoo commented 7 years ago

Hi, @NEU-Gou. I used your method to read the .h5 file. In keyPt, what do the rows other than the first two mean? (screenshot: matlab_read)

NEU-Gou commented 7 years ago

@Hoototo @YuYuYan I guess those are related to the scale/rotation of the keypoints. @kmyid If you guys can provide a bit more detail, that would be really appreciated.

kmyi commented 7 years ago

Hi guys, you can have a look at kp_tools.py for details on this. Sorry for the late reply.

zhuoligetu123 commented 5 years ago

I have solved the problems. Refer to: #4. The software needs to run under Python 2, so I installed Anaconda2, and that solved it.

Hello senior, I am not sure whether you have graduated, but may I ask how you solved this in the end?

hanghoo commented 5 years ago

Hello, what is the problem?

kmyi commented 5 years ago

Please do not comment in Chinese...

Kwang Moo YI Assistant Professor Dept. of Computer Science, University of Victoria +1 (250) 472 5837

On Jan 24, 2019, at 09:09, Nick Hoo notifications@github.com wrote:

Hello, what is the problem?


hanghoo commented 5 years ago

Sorry, sir. Thanks for the reminder.

zhuoligetu123 commented 5 years ago

Hello, what is the problem?

I ran into the same problem you asked about and do not know how to solve it. Could you give me some pointers? IOError: [Errno 2] No such file or directory: '/home/zhuoli/LIFT-master/results/img1_ori.txt'. Could you add me on QQ (3332146855)? Many thanks.

hanghoo commented 5 years ago

@zhuoligetu123 Hi, I think you need to install Python 2 and give it a try.