martin-danelljan / Continuous-ConvOp

The Continuous Convolution Operator Tracker (C-COT).
GNU General Public License v3.0

About the feature no #8

Closed — lchm1990 closed this issue 7 years ago

lchm1990 commented 7 years ago

Hi. At the bottom of the extract_features.m file, a normalization operation is applied to the features. I could not understand this normalization operation — can you offer a reference or a website to help me make it clear? I also noticed that there is no performance degradation when I comment out this code, so what role does this operation play? Thanks~

martin-danelljan commented 7 years ago

Hi. It simply normalizes each feature block (i.e. each element of the feature cell) with its L2 norm. There is a parameter in the runfile to turn this on and off. The performance impact of the normalization is not entirely clear, but it is generally convenient, as other parameters can remain more or less constant if the set of features is changed. It also allows some basic weighting between the different feature blocks, which gives them similar impact in the learning.
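The per-block L2 normalization described above can be sketched as follows. This is a minimal Python illustration of the operation, not the tracker's actual MATLAB code; the function name and list-of-arrays representation of the feature cell are my assumptions.

```python
import numpy as np

def normalize_feature_blocks(blocks, p=2):
    """Illustrative sketch: normalize each feature block by its Lp norm.

    `blocks` stands in for the MATLAB feature cell array: a list of
    arrays, one per feature block. Each block is divided by its own
    Lp norm, so every block ends up with unit norm regardless of its
    original magnitude (the "weighting" effect mentioned above).
    """
    normalized = []
    for block in blocks:
        norm = np.linalg.norm(block.ravel(), ord=p)
        # Guard against an all-zero block to avoid division by zero.
        normalized.append(block / norm if norm > 0 else block)
    return normalized
```

After this step, each block contributes on a comparable scale to the learning, which is why other hyperparameters can stay roughly fixed when the feature set changes.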

lchm1990 commented 7 years ago

Thank you~ @martin-danelljan. I understand that this operation can improve the robustness of the system. What I failed to understand is the reason for normalizing the features with respect to the feature block size. I now see that this operation can be interpreted as a feature block normalization method, but why don't the parameters (gparams.normalize_size, gparams.normalized_dim, and gparams.normalize_power in the code) have the same value? Using the same value for all of them would be easier to understand. Is there any research or study showing that the parameter settings in your Matlab code lead to better normalization performance?

martin-danelljan commented 7 years ago

The normalize_size and normalize_dim parameters only take the values 1 and 0 (on and off). They only set the normalization factor. The normalize_power parameter sets the "p" in the Lp-norm. I found these settings basically by trial and error, though I have not tried to tune normalize_power, since 2 has worked out fine.
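One plausible reading of how the three parameters interact can be sketched as below. This is an assumption on my part, not the repository's exact formula: I take normalize_size and normalize_dim as 0/1 exponents that decide whether the spatial size and the channel count enter the normalization factor, and normalize_power as the p of the Lp norm.

```python
import numpy as np

def normalize_block(x, normalize_size=1, normalize_dim=1, normalize_power=2):
    """Hypothetical sketch of the parameterized block normalization.

    x is one feature block of shape (H, W, D). The target "energy" is
    (H*W)^normalize_size * D^normalize_dim, so with both flags on, larger
    blocks are allowed proportionally larger total Lp energy; with both
    flags off, the block is simply scaled to unit Lp norm.
    """
    h, w, d = x.shape
    target = (h * w) ** normalize_size * d ** normalize_dim
    p = normalize_power
    energy = np.sum(np.abs(x) ** p)
    factor = (target / energy) ** (1.0 / p)
    return x * factor
```

With normalize_size = normalize_dim = 0 this reduces to plain Lp normalization, which matches the "on and off" description of those two flags above.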