HustAIsGroup opened this issue 7 years ago
Thanks for your interest. In fact I should add more comments, and I will. For a quick answer, please read the paper "Selective Transfer Machine for Personalized Facial Action Unit Detection" (http://humansensing.cs.cmu.edu/wschu/papers/doc/cvpr13-stm.pdf), which also explains KMM in a simplified way. The example in my code roughly follows the toy example in their paper, illustrated in Fig. 2. The idea is to reweight the source samples so that their weighted mean is better aligned with the few target examples.
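To make the reweighting idea concrete, here is a minimal sketch of the standard KMM quadratic program (this is *not* the code in this repository; it assumes an RBF kernel, uses `cvxopt` for the QP solve, and picks common default constants):

```python
import numpy as np
from cvxopt import matrix, solvers

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between rows of A (n, d) and rows of B (m, d)."""
    sq_dists = (np.sum(A**2, axis=1)[:, None]
                + np.sum(B**2, axis=1)[None, :]
                - 2 * A @ B.T)
    return np.exp(-sq_dists / (2 * sigma**2))

def kmm_weights(X_train, X_test, sigma=1.0, B=10.0, eps=None):
    """Generic KMM sketch: solve for weights beta on the training samples so that
    the beta-weighted kernel mean of X_train matches the kernel mean of X_test.
    X_train, X_test: arrays of shape (n_tr, d) and (n_te, d)."""
    n_tr, n_te = len(X_train), len(X_test)
    if eps is None:
        eps = B / np.sqrt(n_tr)   # a common default for the sum constraint

    K = rbf_kernel(X_train, X_train, sigma)
    K += 1e-8 * np.eye(n_tr)      # tiny ridge for numerical stability of the QP
    kappa = (n_tr / n_te) * rbf_kernel(X_train, X_test, sigma).sum(axis=1)

    # QP objective: minimize 1/2 beta^T K beta - kappa^T beta
    P = matrix(K)
    q = matrix(-kappa)
    # Constraints: 0 <= beta_i <= B and |sum(beta) - n_tr| <= n_tr * eps
    G = matrix(np.vstack([-np.eye(n_tr), np.eye(n_tr),
                          -np.ones((1, n_tr)), np.ones((1, n_tr))]))
    h = matrix(np.hstack([np.zeros(n_tr), B * np.ones(n_tr),
                          [n_tr * (eps - 1)], [n_tr * (1 + eps)]]))

    solvers.options['show_progress'] = False
    sol = solvers.qp(P, q, G, h)
    return np.array(sol['x']).ravel()   # beta, one weight per training sample
```

The returned weights can then be passed to any learner that accepts per-sample weights; training samples that fall in regions the test set never visits get weights near zero.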
I am very glad to receive your reply. Thanks a lot, it is very helpful to me. I have read the paper you recommended. As I am not a math major, I still have two small questions:
1. In your code, are the x and y of the training and testing data two features, or is x a feature and y a label?
2. In your code, the datasets Z and X represent the training set and the test set respectively, and both are generated with the same function y = x*2 + 10*np.random.random(200) - 5. Does that mean the training data and the test data come from the same distribution? But KMM is meant to handle the case where the training and test distributions differ. Maybe I have misunderstood something.
I am sorry to bother you, and I look forward to your reply. Thank you very much.
From my understanding, two different distributions are used to produce the datasets Z and X. Here, the xs and ys seem to represent the data along two dimensions, x and y, where y depends on x.
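For example, a covariate-shift toy setup along those lines might look like the following (just an illustration, not the repository's actual code; the exact constants and sampling there may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Training inputs drawn from one region, test inputs from another:
# p_train(x) != p_test(x), which is the situation KMM is designed for.
x_train = rng.normal(loc=2.0, scale=1.0, size=200)
x_test = rng.normal(loc=5.0, scale=0.8, size=200)

# The conditional relation y|x is the same for both sets (roughly the
# y = x*2 + noise form quoted above).
y_train = x_train * 2 + 10 * rng.random(200) - 5
y_test = x_test * 2 + 10 * rng.random(200) - 5
```

So even though the same formula maps x to y, the two sets can still differ in distribution because their x values are sampled differently.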
I am interested in KMM. Can you explain the example in your code, and say something about which theorem in the reference your code corresponds to? Thank you very much!