Graylab / IgFold

Fast, accurate antibody structure prediction from deep learning on massive set of natural antibodies

training problem #4

Closed LDAIprotein closed 1 year ago

LDAIprotein commented 2 years ago

This is a great project. I'm learning end-to-end structure prediction and trying to use your loss calculation method. My understanding of the loss calculation is: first, extract the first four of the 'N', 'CA', 'C', 'CB', 'O' atom coordinates from the native PDB, then align them against ipa_coords (the coordinate matrix generated by the structure_ipa module) with the kabsch_mse function to produce coords_loss, which is combined with the other losses for gradient descent. This is excellent work, and I would appreciate it if you could confirm or correct my understanding!

jeffreyruffolo commented 2 years ago

Hi, your understanding sounds correct.

To calculate a loss for IgFold, you'll need to load an individual model rather than use the IgFoldRunner. You can do this by running the following, where ckpt_file is one of the four checkpoint files downloaded when IgFoldRunner is initialized.

IgFold.load_from_checkpoint(ckpt_file)

From here, you should be able to calculate a loss by providing coords_label as part of the IgFoldInput to the model's forward function. The coords_label tensor should be of dimension 1, length of Ab, 4, where 4 corresponds to the N, CA, C, CB coords as you mentioned. You should also provide a batch_mask if there are any missing atoms in your label so that these can be ignored for alignment and loss calculation.
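To make the shapes concrete, here is a minimal numpy sketch of how coords_label and batch_mask could be assembled. The trailing xyz dimension and the per-residue mask semantics are my assumptions and should be checked against the IgFold source; the real model also expects torch tensors rather than numpy arrays.

```python
import numpy as np

# Toy stand-in for parsed PDB coordinates: one (x, y, z) triple per atom,
# four backbone atoms (N, CA, C, CB) per residue. NaNs mark unresolved atoms.
L = 3  # antibody length (illustrative)
residue_coords = np.zeros((L, 4, 3))
residue_coords[1] = np.nan  # pretend residue 2 is missing from the structure

# coords_label: leading batch dimension of 1, then (length, 4 atoms, xyz)
coords_label = residue_coords[None]  # shape (1, L, 4, 3)

# batch_mask: True where the residue has all four label atoms resolved,
# so missing regions are ignored during alignment and loss calculation
batch_mask = ~np.isnan(coords_label).any(axis=(-1, -2))  # shape (1, L)
```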

Let me know if that helps!

LDAIprotein commented 2 years ago

Thank you very much for answering my question. Over the past few days I have tried the method you described while reproducing the training code, but I have run into a problem. I have attempted many fixes without success, and I would be very grateful for your help.

I train and test on a single protein, using the same protein for both the training and test sets. Ideally, of course, the loss (coords_loss) would shrink steadily and the model would overfit. Instead, the coords_loss I observe fluctuates, jumping between large and small values. I checked the code 2-3 times and believe it is fine, so I tried to analyze the cause. My hypothesis: when computing coords_loss, kabsch_mse rotates the targets to minimize the difference between the two coordinate sets, but each gradient-descent step changes the predicted coordinates, so the next call to kabsch_mse finds a different rotation matrix and translation. The regions with large error after re-alignment may be exactly the regions the previous gradient step just modified, because the Kabsch rotation and translation differ on every step.
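For reference, the alignment step being discussed can be sketched as a standalone Kabsch-alignment MSE in numpy. This illustrates the general algorithm only, not IgFold's actual kabsch_mse implementation:

```python
import numpy as np

def kabsch_mse(pred, target):
    """MSE between two (N, 3) point sets after optimal rigid alignment."""
    # Remove translation by centering both coordinate sets
    p = pred - pred.mean(axis=0)
    q = target - target.mean(axis=0)
    # Optimal rotation from the SVD of the covariance matrix (Kabsch algorithm)
    U, _, Vt = np.linalg.svd(p.T @ q)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    # Rotate pred onto target, then average the squared deviation per point
    p_aligned = p @ R.T
    return np.mean(np.sum((p_aligned - q) ** 2, axis=-1))
```

Because the rotation and translation are recomputed from the current predictions at every training step, the alignment itself changes between steps, which is consistent with the fluctuation hypothesis above.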

I also found a small problem: after loading the checkpoint, hparams.config is itself still keyed by "config", so the correct usage is hparams.config["config"].