Hi, I would like to report an issue with the contrast functions in lib/fixedpointalg.m (lines 27--37): you define g as the contrast function G itself and gp as its first derivative, whereas, according to Hyvärinen's and Francesco Negro's papers, the FastICA iteration needs g = G' and gp = G''. As a result, what you are referring to as square is actually the skewness contrast:
% G_skew(x) = x.^3 / 3
gp = @(x) 2*x; % 2nd derivative
g = @(x) x.^2; % 1st derivative
Similarly, what you are referring to as skew is actually the kurtosis contrast:
% G_kurt(x) = x.^4 / 4
gp = @(x) 3*x.^2; % 2nd derivative
g = @(x) x.^3; % 1st derivative
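For context on why the pairing matters, here is a minimal sketch of Hyvärinen's one-unit fixed-point update; the variable names (Z for the whitened data, w for the weight vector, s for the source estimate) are my own for illustration and are not taken from fixedpointalg.m:
s = w' * Z;                                   % current source estimate (Z: channels x samples)
w = Z * g(s)' / size(Z, 2) - mean(gp(s)) * w; % w <- E[z*g(w'z)] - E[gp(w'z)]*w
w = w / norm(w);                              % renormalise
This rule is only valid when g = G' and gp = G''; with the current definitions the iteration effectively optimises a different contrast than the name suggests.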
In the case of logcosh, what you are actually maximising is the anti-derivative of logcosh (which may be well defined mathematically, but it is not a standard ICA contrast and its effectiveness has not been demonstrated). A better choice is to maximise logcosh itself:
% G_logcosh(x) = log(cosh(x))
gp = @(x) 1 - tanh(x).^2; % 2nd derivative
g = @(x) tanh(x); % 1st derivative
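Putting the three corrected pairs together, the block at lines 27--37 could be rewritten along these lines. This is only a sketch, not a drop-in patch: I am assuming the selector variable is called contrastfunc, keeping your g/gp names, and renaming the cases to match what they actually optimise.
switch contrastfunc              % selector name assumed; adapt to the actual variable in the file
    case 'skew'                  % G(x) = x.^3 / 3
        g  = @(x) x.^2;          % G'
        gp = @(x) 2*x;           % G''
    case 'kurtosis'              % G(x) = x.^4 / 4
        g  = @(x) x.^3;          % G'
        gp = @(x) 3*x.^2;        % G''
    case 'logcosh'               % G(x) = log(cosh(x))
        g  = @(x) tanh(x);       % G'
        gp = @(x) 1 - tanh(x).^2; % G''
end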
You can verify the logcosh pair by checking scikit-learn's FastICA implementation.
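Independently of scikit-learn, the derivative relationships for the logcosh pair are easy to check numerically with a central difference, for example:
G  = @(x) log(cosh(x));
g  = @(x) tanh(x);
gp = @(x) 1 - tanh(x).^2;
x  = linspace(-3, 3, 101);
h  = 1e-6;
max(abs((G(x + h) - G(x - h)) / (2*h) - g(x)))   % close to zero, so g matches G'
max(abs((g(x + h) - g(x - h)) / (2*h) - gp(x)))  % close to zero, so gp matches g'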