dauparas / ProteinMPNN

Code for the ProteinMPNN paper
MIT License

Discrepancy between paper and code on attention module #45

Closed MaoSihong closed 1 year ago

MaoSihong commented 1 year ago

I did not see an implementation of dot-product attention in the ProteinMPNN code. According to the ProteinMPNN paper, I expected dot-product attention, just like in the canonical transformer, but the code seems to implement additive attention instead. Did I miss something in the code, or is this a mismatch between the code and the paper's description?
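For context on the distinction being asked about, here is a minimal sketch (all class and function names here are hypothetical, not the repository's actual code) contrasting canonical scaled dot-product attention with an additive, MLP-style message computation, where neighbor messages come from a feed-forward network over concatenated node and edge features rather than from a QK^T score:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # Canonical transformer attention: softmax(Q K^T / sqrt(d)) V
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)
    return F.softmax(scores, dim=-1) @ v

class AdditiveMessageLayer(nn.Module):
    """Illustrative MLP-style message passing (hypothetical names):
    messages are produced by a feed-forward network over concatenated
    node and edge features; there is no dot-product attention score."""
    def __init__(self, d_node, d_edge, d_hidden):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * d_node + d_edge, d_hidden),
            nn.GELU(),
            nn.Linear(d_hidden, d_node),
        )

    def forward(self, h, e, neighbor_idx):
        # h: [B, N, d_node]  node features
        # e: [B, N, K, d_edge]  edge features for K neighbors per node
        # neighbor_idx: [B, N, K]  indices of the K neighbors of each node
        B, N, K = neighbor_idx.shape
        # Gather neighbor node features h_j for each (node, neighbor) pair
        h_j = torch.gather(
            h.unsqueeze(1).expand(B, N, N, -1), 2,
            neighbor_idx.unsqueeze(-1).expand(B, N, K, h.size(-1)))
        h_i = h.unsqueeze(2).expand(B, N, K, -1)
        # Message = MLP([h_i, h_j, e_ij]); aggregate by averaging over neighbors
        messages = self.mlp(torch.cat([h_i, h_j, e], dim=-1))
        return h + messages.mean(dim=2)
```

The practical difference: in dot-product attention the neighbor weighting comes from a learned similarity score between query and key projections, while in the additive/MLP formulation the network computes each message directly from the concatenated features and aggregates them with a fixed operation (here a mean).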