LifeBeyondExpectations opened this issue 6 years ago
@LifeBeyondExpectations @YangHM Do you have the same problem? Do you know how to plot it now?
A bottleneck layer with only two units may solve it. The features will then lie in a 2-D space, so we can visualize them directly. In nets.py, the second-to-last layer is the bottleneck layer. This is commonly used in toy models to visualize the behavior of a new loss function.
@fjchange But I just tried to train the model with a two-dimensional bottleneck layer on MNIST using softmax + cross-entropy loss, and it doesn't seem to converge. The accuracy stays at ~11.35%.
> A bottleneck layer with only two units may solve it. The features will then lie in a 2-D space, so we can visualize them directly. In nets.py, the second-to-last layer is the bottleneck layer. This is commonly used in toy models to visualize the behavior of a new loss function.
You are right. The feature layer has only two nodes, so the features extracted by the CNN are two-dimensional. Two-dimensional features can be plotted directly for visualization. I found this method in other works, so I followed them and conducted the same toy experiments.
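The plotting step described above can be sketched as follows. This is a minimal, hypothetical example (not code from this repository): it assumes you already have an `(N, 2)` array of penultimate-layer activations and an `(N,)` array of integer labels, and simply scatter-plots each class in a different color.

```python
# Hypothetical sketch: scatter-plot 2-D features colored by class label.
# `features` (N, 2) and `labels` (N,) are assumed to come from a forward
# pass through the network's two-node feature layer; names are illustrative.
import matplotlib
matplotlib.use("Agg")  # headless backend so this also runs without a display
import matplotlib.pyplot as plt
import numpy as np

def plot_2d_features(features, labels, num_classes=10, path="features.png"):
    plt.figure(figsize=(6, 6))
    for c in range(num_classes):
        mask = labels == c
        plt.scatter(features[mask, 0], features[mask, 1], s=2, label=str(c))
    plt.legend(markerscale=4, title="class")
    plt.title("2-D feature space")
    plt.savefig(path)
    plt.close()
```

Calling `plot_2d_features(features, labels)` after extracting the features for a batch of test images should reproduce the kind of figure shown in such papers.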
> @fjchange But I just tried to train the model with a two-dimensional bottleneck layer on MNIST using softmax + cross-entropy loss, and it doesn't seem to converge. The accuracy stays at ~11.35%.
I don't know the exact definition of a "bottleneck layer". In our work, we directly set the feature layer (the second-to-last layer) to have only two nodes. As for the convergence problem, there may be different reasons: one is the hyper-parameter settings, and another is the data used. We don't use the raw data; we apply some pre-processing. Please read the README.md of this repository for more information about the data.
Then it makes sense, since I used the raw MNIST dataset without pre-processing. Thank you!
Do you have a program to plot the 2-D feature space of the samples? I have no idea how to draw it.
@yangrunz In my opinion, you just need to add an additional layer whose output dimension is 2, then use that output to plot the 2-D features.
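The idea above can be illustrated with a minimal sketch: insert a two-unit feature layer between the backbone output and the classifier, so the penultimate activations are directly plottable. This is plain NumPy with illustrative names and random weights (an assumption, not this repository's TensorFlow code), just to show where the 2-D features come from.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W_feat, W_cls):
    """Project backbone activations through a two-unit feature layer, then classify.

    x:      (N, D) backbone output
    W_feat: (D, 2) weights of the two-unit feature (bottleneck) layer
    W_cls:  (2, C) weights of the final softmax classifier
    Returns (features, logits); `features` is what gets scatter-plotted.
    """
    features = np.tanh(x @ W_feat)   # (N, 2) -- directly plottable in 2-D
    logits = features @ W_cls        # (N, C) -- fed to softmax + cross-entropy
    return features, logits

# Toy shapes: 5 samples, 64-dim backbone output, 10 classes.
x = rng.normal(size=(5, 64))
features, logits = forward(x, rng.normal(size=(64, 2)), rng.normal(size=(2, 10)))
```

Because the classifier sees only the two feature values, training forces the network to separate the classes within that 2-D space, which is what makes the resulting scatter plots informative.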
OK, I will give it a try. Thanks for your reply.
Does the paper use t-SNE or PCA?