nishadi closed this issue 4 years ago
Hi, the implementation of TCA is correct. You can compare it with the code of JDA. If you have a singular matrix, you can try adding some Gaussian noise or tuning the hyperparameters to avoid that.
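To make the two suggested remedies concrete, here is a hedged sketch (illustrative shapes and values, not the repo's actual code) of how a ridge term on the diagonal or a little input noise restores invertibility of a nearly singular matrix:

```python
import numpy as np

# Illustration only: a low-rank Gram matrix is singular, which is what
# makes inverses and eigendecompositions blow up to inf/huge values.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 2))
K = X @ X.T                      # rank-2 Gram matrix of shape (5, 5)

# Remedy 1: ridge-style regularization -- add lamb * I to the matrix
# (raising the lambda hyperparameter in TCA has the same effect).
lamb = 1e-3
K_reg = K + lamb * np.eye(K.shape[0])

# Remedy 2: tiny Gaussian noise on the features before building the matrix.
X_noisy = X + 1e-6 * rng.standard_normal(X.shape)

print(np.linalg.cond(K))      # enormous: numerically singular
print(np.linalg.cond(K_reg))  # finite and moderate
```

Either remedy bounds the smallest eigenvalue away from zero, so the condition number drops from astronomically large to something a solver can handle.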
Thanks a lot for getting back. Will do that. Another quick question: why do we normalize the matrix M?
Hi, when I tried to use the provided testing datasets ('webcam_SURF_L10.mat' as the source and 'amazon_SURF_L10.mat' as the target), it still gives me inf as eigenvalues. Is that acceptable?
Normalization is a common operation in our experiments; we apply it only when it gives better results than no normalization.
I don't remember seeing inf values. But as long as the final results (accuracy) are good, it should be fine.
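On the normalization of M: a minimal sketch of the Frobenius-norm scaling, assuming the usual TCA-style construction of the MMD matrix M from source/target sample counts (variable names here are illustrative, not necessarily the repo's):

```python
import numpy as np

# Hedged sketch: the MMD matrix M is built from the source (ns) and
# target (nt) sample counts, then scaled by its Frobenius norm so the
# MMD term stays on a comparable scale to the regularizer.
ns, nt = 4, 6
e = np.vstack((np.full((ns, 1), 1.0 / ns),
               np.full((nt, 1), -1.0 / nt)))
M = e @ e.T                        # (ns+nt) x (ns+nt) MMD matrix
M = M / np.linalg.norm(M, 'fro')   # the normalization asked about above

print(np.linalg.norm(M, 'fro'))    # 1.0 after scaling
```

After this scaling, changing the sample sizes no longer changes the overall magnitude of the MMD term, which keeps the lambda hyperparameter in a sensible range across datasets.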
Thank you for the awesome repo. I am a PhD student at the Australian National University and new to transfer learning. I will be really happy to cite this repo in my future publications. I have a small clarification question about the provided code in TCA.py.
In the paper by Pan, in order to find the transformation matrix A, we need to find the leading eigenvectors of (KLK + λI)⁻¹KHK.
However, in the implementation of TCA.py, the eigenvectors are found by solving a generalized eigenvalue problem instead.
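The relation between the two formulations can be sketched numerically (a hedged illustration with random matrices, not the repo's actual code; `scipy.linalg.eig(A, B)` is one way to pose the generalized problem): with A = KLK + λI and B = KHK, the generalized eigenvectors of A w = η B w with the smallest η are exactly the leading eigenvectors of A⁻¹(KHK) from the paper. And since the centering matrix H has rank n−1, B is singular, so one η typically comes out as inf; that is one benign source of 'inf' eigenvalues.

```python
import numpy as np
import scipy.linalg

# Hedged sketch (illustrative shapes, not the repo's exact code).
rng = np.random.default_rng(0)
n = 6
K = rng.standard_normal((n, n)); K = K @ K.T + n * np.eye(n)  # SPD "kernel"
L = rng.standard_normal((n, n)); L = L @ L.T                  # PSD "MMD" matrix
H = np.eye(n) - np.ones((n, n)) / n                           # centering, rank n-1
A = K @ L @ K + 1.0 * np.eye(n)                               # KLK + lambda*I
B = K @ H @ K                                                 # KHK (singular)

eta, W = scipy.linalg.eig(A, B)   # generalized problem A w = eta B w
# B is singular, so one eta is typically inf (or huge); keep the
# eigenvector of the smallest finite |eta|, which corresponds to the
# leading eigenvector of inv(A) @ B.
finite = np.isfinite(eta)
order = np.argsort(np.abs(eta[finite]))
w0 = np.real(W[:, np.flatnonzero(finite)[order[0]]])

# Check: w0 is (up to scale) an eigenvector of inv(A) @ B, as in the paper.
v = np.linalg.solve(A, B @ w0)
v_unit = v / np.linalg.norm(v)
w_unit = w0 / np.linalg.norm(w0)
print(abs(v_unit @ w_unit))       # ~1.0: same direction
```

The generalized form avoids explicitly inverting KLK + λI, which is numerically safer when that matrix is ill-conditioned.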
I have several questions regarding these two approaches.
Is it acceptable to get inf as eigenvalues? (I am getting several eigenvalues as 'inf', probably because my KHKᵀ is nearly a singular matrix.) I would be really grateful if you could help me with these questions.