Closed — euhruska closed this issue 6 years ago
Actually, the paper mentions both VAMP-1 and VAMP-2 but focuses on VAMP-2. Andreas and Luca got their best results by using a combination of two losses — they can comment on this. In principle, VAMP-1 and VAMP-2 have the same optimum, but that doesn't mean the optimum is equally easy to find with each. The third loss, VAMP-E (I don't know whether that is the "loss" in the code), is theoretically very appealing because you don't need to specify the number of processes, but in practice we haven't had much success with it yet.
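To make the relationship between the two scores concrete, here is a minimal NumPy sketch of how VAMP-1 and VAMP-2 can be computed from the same time-lagged data. It follows the standard definition (both scores are norms of the half-weighted Koopman matrix, so they share the same optimizing functions); the function name `vamp_scores` and the regularization constant are my own choices, not the deeptime code's:

```python
import numpy as np

def _inv_sqrt(c, eps=1e-10):
    """Regularized inverse matrix square root via eigendecomposition."""
    w, v = np.linalg.eigh(c)
    w = np.maximum(w, eps)
    return v @ np.diag(w ** -0.5) @ v.T

def vamp_scores(chi_0, chi_t):
    """VAMP-1 and VAMP-2 scores of featurized time-lagged data.

    chi_0, chi_t: (T, n) arrays of network outputs at times t and t + tau.
    """
    T = chi_0.shape[0]
    chi_0 = chi_0 - chi_0.mean(axis=0)
    chi_t = chi_t - chi_t.mean(axis=0)
    c00 = chi_0.T @ chi_0 / T          # instantaneous covariance at time t
    c11 = chi_t.T @ chi_t / T          # instantaneous covariance at time t + tau
    c01 = chi_0.T @ chi_t / T          # time-lagged covariance
    # Half-weighted Koopman matrix; its singular values define the scores.
    k = _inv_sqrt(c00) @ c01 @ _inv_sqrt(c11)
    s = np.linalg.svd(k, compute_uv=False)
    vamp1 = s.sum()        # VAMP-1: nuclear norm
    vamp2 = (s ** 2).sum() # VAMP-2: squared Frobenius norm
    return vamp1, vamp2
```

Since both scores are monotone functions of the same singular values, maximizing either one targets the same slow processes — which is why the choice between them is about optimization behavior, not about which optimum you reach.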
On 30/08/18 at 10:12, Eugen Hruska wrote:
The vampnet/examples calculates 3 different loss functions, vamp1, vamp2 and loss, but the paper mentions only vamp2. What is better to use?
Issue: https://github.com/markovmodel/deeptime/issues/19
--
Prof. Dr. Frank Noe, Head of Computational Molecular Biology group, Freie Universitaet Berlin
So in the code there are three losses: loss_VAMP (VAMP-1), loss_VAMP2 (VAMP-2), and _loss_VAMP_sym (a version of VAMP-1 with symmetrized covariance matrices). Basically, you can just use one loss, as Frank mentioned. In our experience, the best way to get a nice state separation is to use a combination of these losses (see the ala example: if you watch the loss over training, you can see how each change of loss improves the result further). For our applications, the best results came from using the losses in the following order: 1. VAMP2
Hope this helps!
Best, Andreas
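The symmetrized variant mentioned above can be sketched by averaging the covariance matrices over both time directions before scoring, which enforces a reversible (detailed-balance) estimate. This is a common way to symmetrize; whether _loss_VAMP_sym in the repository does exactly this is an assumption on my part, and the function name `vamp1_sym` below is mine:

```python
import numpy as np

def vamp1_sym(chi_0, chi_t, eps=1e-10):
    """Sketch of a symmetrized VAMP-1 score (assumption: covariances are
    averaged over forward and backward time before scoring)."""
    T = chi_0.shape[0]
    chi_0 = chi_0 - chi_0.mean(axis=0)
    chi_t = chi_t - chi_t.mean(axis=0)
    # Symmetrized instantaneous and time-lagged covariances.
    c0 = (chi_0.T @ chi_0 + chi_t.T @ chi_t) / (2 * T)
    ct = (chi_0.T @ chi_t + chi_t.T @ chi_0) / (2 * T)
    # Regularized inverse square root of c0.
    w, v = np.linalg.eigh(c0)
    w = np.maximum(w, eps)
    c0_inv_sqrt = v @ np.diag(w ** -0.5) @ v.T
    # k is symmetric, so its singular values are |eigenvalues|.
    k = c0_inv_sqrt @ ct @ c0_inv_sqrt
    return np.abs(np.linalg.eigvalsh(k)).sum()
```

Because the symmetrized matrix is Hermitian, its spectrum is real, which can make the score smoother to optimize at the price of assuming reversible dynamics.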
Thank you