musyoku / adversarial-autoencoder

Chainer implementation of adversarial autoencoder (AAE)

A question about cluster heads cost #5

Open hldwc opened 7 years ago

hldwc commented 7 years ago

Hello! I have recently been studying your adversarial-autoencoder code. I don't understand how you define the starting labels and ending labels that appear in aae_dim_reduction.py. Could you explain why you defined them that way?

musyoku commented 7 years ago

Hi. A naive implementation that computes the Euclidean distance between every two cluster heads would look like this:

[image: aae_head_cost_1]

But it computes duplicate values.

[image: aae_head_cost_2]

Only the following values are required, and starting_labels and ending_labels are defined accordingly:

[image: aae_head_cost_3]

:smile:
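In case the images above don't load: the idea is that the naive double loop computes each distance twice (and the zero self-distances), so the label arrays enumerate each unordered pair of cluster heads exactly once. A minimal NumPy sketch of that pairing scheme (illustrative only, not the exact code in aae_dim_reduction.py; the variable names `num_clusters` and `heads` are assumptions):

```python
import itertools
import numpy as np

num_clusters = 10  # e.g. one cluster head per digit class (assumed)

# Enumerate every unordered pair (i, j) with i < j exactly once,
# avoiding the duplicates produced by the naive all-pairs loop.
pairs = list(itertools.combinations(range(num_clusters), 2))
starting_labels = np.asarray([i for i, _ in pairs])
ending_labels = np.asarray([j for _, j in pairs])

# Cluster heads: one 2-D point per cluster (random, for illustration only)
heads = np.random.randn(num_clusters, 2).astype(np.float32)

# Euclidean distance between each unique pair of heads
distances = np.linalg.norm(heads[starting_labels] - heads[ending_labels], axis=1)

# 10 heads give 10 * 9 / 2 = 45 unique pairs
assert distances.shape == (45,)
```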

nianzu-ethan-zheng commented 6 years ago

I tried the code, but I find that the cluster head loss doesn't help the classification accuracy. Also, in the last picture you posted, the same digits seem to be separated into different clusters, so I guess the accuracy is not good. Maybe there is something wrong?

musyoku commented 6 years ago

I think that the cluster head loss does not contribute to improving classification accuracy. It is used to increase the distance between clusters.

[image: scatter_r]
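To illustrate what "increase the distance between clusters" means, here is one common hinge-style formulation of such a cost, sketched in NumPy. This is an assumption for illustration, not necessarily the exact loss in aae_dim_reduction.py (the function name `cluster_head_cost` and the `margin` parameter are hypothetical):

```python
import numpy as np

def cluster_head_cost(heads, starting_labels, ending_labels, margin=1.0):
    # Distance between each selected pair of cluster heads.
    d = np.linalg.norm(heads[starting_labels] - heads[ending_labels], axis=1)
    # Penalize only pairs closer than `margin`; pairs already far apart
    # contribute zero, so the loss pushes clusters apart but does not
    # directly optimize classification accuracy.
    return np.maximum(0.0, margin - d).sum()
```

With two heads at distance 2.0 and a margin of 1.0 the cost is zero; at distance 0.5 the cost is 0.5, so gradient descent on this term would push the heads apart.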

musyoku commented 6 years ago

In unsupervised learning, accuracy is not good.

[image: clusters]

I'm not sure if it is a bug.

nianzu-ethan-zheng commented 6 years ago

But according to the paper, the classification error reaches 4.2% and 6.08% with 1000 and 100 labels respectively. My test result is 10% error with 1000 labels, as in the picture below, and with 100 labels it is just a mess. That doesn't seem good!

I also tried a fixed rotation transform matrix, but the result was not ideal either.

So why is the classification error not as good as in the paper?

musyoku commented 6 years ago

I think that it is due to differences in implementation. I don't know how the authors implemented it, so it is difficult to reproduce their results perfectly.