JachyLikeCoding opened 10 months ago
Hi!
The f1 score reported during training is the classification f1 score, not the actual localization f1 score. We also found that the model with the best validation loss typically performs better than the model with the best validation f1 score.
Regarding the sphereface loss, here is one config we used a while ago:
```json
{
  "identifier": "SiameseNet",
  "network_config": {
    "output_channels": 128,
    "dropout": 0.157062,
    "repeat_layers": 0,
    "norm_name": "GroupNorm",
    "norm_kwargs": {
      "num_groups": 32
    }
  },
  "train_config": {
    "loss": "SphereFaceLoss",
    "tl_margin": null,
    "sf_margin": 6.0,
    "sf_scale": 1.51195,
    "miner": false,
    "miner_margin": null,
    "learning_rate": 7.914e-05,
    "batchsize": 51,
    "num_classes": 27
  }
}
```
In general, this function might help you to create others: https://github.com/MPI-Dortmund/tomotwin-cryoet/blob/main/tomotwin/train_main.py#L570C5-L570C18
To be honest, this part of TomoTwin could be much more flexible.
May I ask how the num_classes is set for this? Doesn't it represent all the classes?
Best regards, Chi
You should set it to the number of classes in the training set.
Dear Thorsten,
I have been using your provided dataset and configuration scripts for training, and I achieved an f1 score of 0.90 after 300 epochs, which is even higher than the value reported in the paper. May I ask whether the dataset you provide is complete, or whether there have been additional methodological optimizations that could explain this difference?
Secondly, I am interested in experimenting with the ArcFace and SphereFace loss functions, and I was wondering if you could provide the configuration files for these scenarios.
Thank you for your time and assistance. I look forward to your insights.
Best regards, Chi