Closed — yunxiaoliCB closed this issue 11 months ago
Hi, sorry for the late update.
You can find the latest update of our paper and code here: https://x.com/Oxer22/status/1717167378067316854?s=20 https://arxiv.org/abs/2303.06275 https://github.com/DeepGraphLearning/ESM-GearNet
For your question: yes. Using the output of GearNet concatenated with the output of ESM-1b as the final representation works best.
Thank you for pointing me there. Kudos on the new release!
Hi, I've learned a lot from this great work. Thank you for presenting it in the paper and here!
I wanted to ask about the implementation of the series connection of the PLM & GNN in the FusionNetwork. In the PLM+GNN paper (Zhang, Z. et al. Enhancing Protein Language Models with Structure-based Encoder and Pre-training. arXiv (2023) doi:10.48550/arXiv.2303.06275), the authors tested three ways of fusing the PLM & GNN and decided to use the series connection. The series connection is described as:
In the implementation of FusionNetwork, I saw that it indeed uses the output of ESM-1b as the node features of GearNet, but then it seems to use the output of GearNet concatenated with the output of ESM-1b as the final representation (pasted below). Which of these did the authors find most effective? Should one use the sole output of GearNet, or the concatenated output?
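To make the question concrete, here is a minimal sketch of the serial connection as I understand it from the code: the PLM's per-residue embeddings become the GNN's input node features, and the final representation concatenates both outputs. All names here (`fuse_serial`, the `plm`/`gnn` callables) are hypothetical placeholders, not the actual FusionNetwork API.

```python
import torch

def fuse_serial(plm, gnn, graph, sequence):
    # Hypothetical illustration of the serial connection:
    # 1) the PLM produces per-residue embeddings,
    # 2) those embeddings are fed to the GNN as its input node features,
    # 3) the final representation concatenates both encoders' outputs.
    plm_feats = plm(sequence)            # shape: (num_residues, d_plm)
    gnn_feats = gnn(graph, plm_feats)    # shape: (num_residues, d_gnn)
    return torch.cat([plm_feats, gnn_feats], dim=-1)
```

Under this reading, the GNN already sees the PLM features, so the concatenation additionally preserves the raw PLM signal in the final representation rather than relying on the GNN to pass it through.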
If possible, could you please share some configurations for trying out the "cross" style (quoted below) of fusing the PLM & GNN? I am interested in testing this option and would like to know the transformer configurations (number of layers, hidden dims, number of heads) that you have tried.