Yanara-Tian closed this issue 1 year ago.
Thanks for your interest in our work!

The two views are generated by `crop_func` and `noise_func`. Then, after pre-training, we keep the encoder for downstream tasks. This means that if you have a new protein, you should feed it directly to the encoder. You don't need to feed it into `MultiviewContrast` and get views for it; `MultiviewContrast` is only used for pre-training.

Thank you very much for your detailed answer, I benefited a lot.
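To make that pre-train / downstream split concrete, here is a minimal sketch assuming the torchdrug-style API this repo is built on, with `MultiviewContrast` importable from `torchdrug.models` as the repo's configs suggest. The GearNet hyperparameters, the crop/noise function choices, and the `embed_protein` helper are illustrative placeholders, not the exact `mc_gearnet_edge.yaml` settings.

```python
import torch
from torchdrug import models
from torchdrug.layers import geometry

# Encoder: GearNet (GeometryAwareRelationalGraphNeuralNetwork).
# Hyperparameters below are illustrative, not the paper's exact config.
encoder = models.GearNet(input_dim=21, hidden_dims=[512, 512, 512],
                         num_relation=7, batch_norm=True, readout="sum")

# Pre-training: wrap the encoder in MultiviewContrast (signature as quoted
# in this thread), which builds two views of each protein with a sampled
# crop_func and noise_func. Only this wrapper needs the crop/noise functions.
pretrain_model = models.MultiviewContrast(
    encoder,
    crop_funcs=[geometry.SubsequenceNode(max_length=50)],
    noise_funcs=[geometry.IdentityNode()],
    num_mlp_layer=2, tau=0.07)

# ... run contrastive pre-training on unlabeled structures, then discard the
# wrapper and keep `encoder` ...

def embed_protein(encoder, protein_graph):
    """Downstream use: feed a new protein graph straight to the encoder."""
    with torch.no_grad():
        out = encoder(protein_graph, protein_graph.node_feature.float())
    # torchdrug encoders return both graph-level and residue-level features.
    return out["graph_feature"], out["node_feature"]
```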
For the third question, you mean that `MultiviewContrast` is only used during training to obtain a good protein encoder, and we only need this encoder when encoding proteins for downstream tasks. In this model, the encoder is GearNet, right?
Yes.
Thank you very much. Your work is excellent and I have learned a lot.
Feel free to send me an email (zuobai.zhang@mila.quebec) if you have other questions~
OK, thank you. Best wishes!
Hello, since my ability to read code is not very strong, I have a few problems understanding the model. (Because I am very interested in your work, I am sorry to have so many questions~) Referring to the mc_gearnet_edge.yaml file, the MultiviewContrast module in the model is followed by a multi-layer perceptron. However, the output of MultiviewContrast is divided into output1 and output2, consisting of graph features and node features, while there is only one input to the MLP.

1) What is the input to the MLP?
2) What is the `model` argument in the MultiviewContrast module (`def __init__(self, model, crop_funcs, noise_funcs, num_mlp_layer=2, activation="relu", tau=0.07)`)? Is it GeometryAwareRelationalGraphNeuralNetwork?
3) In which step do you obtain the new graph based on contrastive learning that is mentioned in your article? (Because the MultiviewContrast module has two output results, I don't know which one is better.)
Very much looking forward to your reply!
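For readers with the same questions: below is a rough, hedged reconstruction of what a contrastive module with the signature quoted above typically computes, not the repository's exact code. Consistent with the answer earlier in this thread, the `model` argument is the encoder (GearNet), the MLP is a projection head applied to each view's graph-level feature, and the two outputs are the two views' projections, which exist only to compute the InfoNCE loss with temperature `tau`; neither output is used downstream.

```python
import random
import torch
from torch import nn
import torch.nn.functional as F

class MultiviewContrastSketch(nn.Module):
    """Hedged sketch of a multiview contrastive wrapper.

    `model` is the protein encoder (e.g. GearNet); it is assumed to expose
    an `output_dim` attribute and to return a dict with "graph_feature",
    in the torchdrug style. This is a reconstruction for illustration.
    """

    def __init__(self, model, crop_funcs, noise_funcs, num_mlp_layer=2, tau=0.07):
        super().__init__()
        self.model = model
        self.crop_funcs = crop_funcs
        self.noise_funcs = noise_funcs
        self.tau = tau
        # Projection head: an MLP on top of the graph-level feature.
        mlp_layers, in_dim = [], model.output_dim
        for i in range(num_mlp_layer):
            mlp_layers.append(nn.Linear(in_dim, model.output_dim))
            if i + 1 < num_mlp_layer:
                mlp_layers.append(nn.ReLU())
            in_dim = model.output_dim
        self.mlp = nn.Sequential(*mlp_layers)

    def encode_view(self, graph):
        # Sample one crop and one noise transform to build a stochastic view.
        graph = random.choice(self.crop_funcs)(graph)
        graph = random.choice(self.noise_funcs)(graph)
        out = self.model(graph, graph.node_feature.float())
        # The MLP input is the graph-level feature of the view.
        return self.mlp(out["graph_feature"])

    def forward(self, graph):
        # Two views of the same proteins -> InfoNCE with temperature tau.
        z1 = F.normalize(self.encode_view(graph), dim=-1)
        z2 = F.normalize(self.encode_view(graph), dim=-1)
        logits = z1 @ z2.t() / self.tau                    # (batch, batch)
        labels = torch.arange(len(z1), device=z1.device)   # diagonal = positives
        return F.cross_entropy(logits, labels)
```

Under this reading, question 3 resolves itself: the two outputs are symmetric views consumed by the loss, so neither is "better"; after pre-training you keep only `self.model` (the encoder) for downstream tasks, as the maintainer explained above.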