berenslab / morphvae

MorphVAE: Generating Neural Morphologies from 3D-Walks
GNU General Public License v3.0

example of classification inference #2

Closed cojocchen closed 2 years ago

cojocchen commented 2 years ago

Thank you for sharing the code; it is nice work. I want to adopt this method for neuron morphology classification.

Now I am trying to reproduce the result on classifying M1 EXC neurons (Table 3 of the paper). Since the model files and inference code are not available, I tried to follow the steps in the paper to train the model and implement the inference code myself. However, I have some difficulty reproducing the result shown in "K-nearest neighbor classification on neuron rep.ipynb": the score is lower than expected, and I am not sure which step went wrong. I will keep debugging this. Meanwhile, it would be of great help if you could share more information for debugging purposes, such as example inference code (i.e. the code generating the neuron_latentrepresentation*.npy files) and/or model files.

Below is what I got for frac 1.0:

- Run 1 of model frac 1.0: Train score: 0.63125, Test score: 0.43333333333333335
- Run 2 of model frac 1.0: Train score: 0.6375, Test score: 0.5166666666666667
- Run 3 of model frac 1.0: Train score: 0.64375, Test score: 0.5
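
For reference, here is a minimal sketch of the kind of k-NN evaluation I am running; the file names and the choice of `n_neighbors` are placeholders for my own setup, not files or settings produced by the repository:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Placeholder file names -- the representations would come from the trained
# MorphVAE model (the neuron_latentrepresentation*.npy files mentioned above).
X = np.load("neuron_latent_representation_frac1.0_run1.npy")  # (n_neurons, latent_dim)
y = np.load("neuron_labels.npy")                              # (n_neurons,)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print("Train score:", knn.score(X_train, y_train))
print("Test score:", knn.score(X_test, y_test))
```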

cojocchen commented 2 years ago

I found an example of inference code in "Plot t-SNE of neural representations.ipynb" (the save_r_T function). However, the performance on M1 EXC is still not good. I will keep trying.
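
For anyone following along, this is a rough sketch of the kind of pooling-and-saving step I mean; `model.encode`, `walks_per_neuron`, and the file name are hypothetical placeholders for whatever the trained model and data loaders actually expose, not the repository's API (the real code is in save_r_T):

```python
import numpy as np
import torch

def neuron_representation(model, walks):
    """Pool walk-level latent vectors into a single per-neuron vector."""
    model.eval()
    with torch.no_grad():
        z = model.encode(walks)          # placeholder: walk-level latent vectors
    return z.cpu().numpy().mean(axis=0)  # average over the neuron's walks

# reps = np.stack([neuron_representation(model, w) for w in walks_per_neuron])
# np.save("neuron_latent_representation_run1.npy", reps)
```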

Meanwhile, could you give me some idea of why model.Z is used instead of model.h for classification? To my understanding, Z consists of randomly sampled vectors, which will cause varied prediction results for the same data with the same model.

cojocchen commented 2 years ago

I solved the M1 EXC performance problem. I also tried using model.h for classification, and the performance is not bad. Your comments on choosing model.Z or model.h for classification would be appreciated.

philippberens commented 2 years ago

Thanks @cojocchen for your feedback - I will ping @Visdoom for concrete answers

Visdoom commented 2 years ago

Hi @cojocchen,

Apologies for the late reply! I am happy to hear you managed to get the code running. Would you mind sharing what the specific problem was that you had?

Your interpretation is right: z is randomly sampled. In fact, it is the mean of 5 random samples, where model.h parametrizes the mean direction in the spherical latent space. You make a good point that classifying on model.h would be deterministic; however, I chose model.z instead to regularize the classifier and to make sure that representations generated in similar regions of the latent space map to the same class label. Nevertheless, I believe both approaches are valid, and it might be interesting to compare them. In my work I tried to bridge generative modeling and representation learning, but if classification performance is what you are optimizing for, then classifying on model.h is a good idea.
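
As a rough illustration of such a comparison (not code from the repository; `reps_h`, `reps_z`, and the file names are placeholders for per-neuron representations pooled from model.h and model.z respectively):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Placeholder arrays: representations pooled from model.h (deterministic) and
# from model.z (mean of 5 samples around the mean direction given by model.h).
reps_h = np.load("neuron_representations_h.npy")
reps_z = np.load("neuron_representations_z.npy")
labels = np.load("neuron_labels.npy")

knn = KNeighborsClassifier(n_neighbors=5)
for name, reps in [("model.h", reps_h), ("model.z", reps_z)]:
    scores = cross_val_score(knn, reps, labels, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```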

I hope this clarifies your question. Feel free to reach out if other things are unclear. I will keep an eye out for GitHub pings!