FabianFuchsML / se3-transformer-public

code for the SE3 Transformers paper: https://arxiv.org/abs/2006.10503

Lie Transformer #7

Closed by lucidrains 3 years ago

lucidrains commented 3 years ago

Hi Fabian!

Thank you for your great work, and for your explanatory blog post speculating on how AlphaFold 2 might be using equivariance.

I had been set on working with SE(3)-Transformers when a new paper came out last week: https://arxiv.org/abs/2012.10885 (the LieTransformer). I was wondering whether you think there is a possibility that AlphaFold 2 used this approach instead. Could you evaluate this method against the one used in SE(3)-Transformers (the authors claim theirs is the better approach)? Any advice, thoughts, or comments would be appreciated.

Phil

FabianFuchsML commented 3 years ago

Hey Phil!

Thank you for your interest and the kind words!

This is the question of whether to use irreducible representations or regular representations. In the blog post you are referring to, we say the following:

"Irreducible representations are the basis of multiple works in this area, particularly of 3D Steerable CNNs and Tensor Field Networks. The SE(3)-Transformer is heavily based on both of these. Another set of works uses regular representations. If you want to learn about these, you could read this blog post about Taco Cohen’s seminal work from 2016, or this 2020 paper from Marc Finzi."

The paper you linked, by Michael Hutchinson et al., came out after we wrote the blog post, but it very much fits into that last sentence, as it also works with regular representations. From a timeline perspective, I think it's a bit more likely that AlphaFold 2 used irreducible representations, but that's just my guess.
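To make the regular-representation idea concrete, here is a minimal toy sketch (not from either paper): features are indexed by the elements of a small discrete group, the cyclic group C4, the group acts by cyclically permuting the channels, and a group convolution commutes with that action. The function names `act_regular` and `group_conv` are my own for illustration.

```python
import numpy as np

# Regular representation of the cyclic group C4 (rotations by 90 degrees):
# a feature vector has one channel per group element, and acting with a
# group element g cyclically permutes the channels.
def act_regular(g, features):
    return np.roll(features, g)

# Group convolution in the regular representation: correlate the features
# with a learnable kernel over the group. This operation is equivariant.
def group_conv(features, kernel):
    n = len(features)
    return np.array([sum(features[(h + g) % n] * kernel[h] for h in range(n))
                     for g in range(n)])

f = np.array([1.0, 2.0, 3.0, 4.0])  # one feature per group element
k = np.array([0.5, 0.25, 0.0, 0.25])

# Equivariance check: convolving the transformed signal equals
# transforming the convolved signal.
g = 1
assert np.allclose(group_conv(act_regular(g, f), k),
                   act_regular(g, group_conv(f, k)))
```

For continuous groups like SE(3), the Hutchinson et al. and Finzi et al. papers replace this exact sum over group elements with sampling/discretization schemes, which is where the two lines of work differ most.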

Both Marc's and Michael's papers are fantastic. To find out whether irreducible representations or regular representations work better for your specific use case, just try both! Note that if you do something similar to AlphaFold 2, the task will be equivariant rather than invariant, so I wouldn't put too much weight on the QM9 results, which are invariant (scalar) regression targets.
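The invariant-vs-equivariant distinction can be checked numerically. Below is a small sketch (my own illustration, not from the SE(3)-Transformer code): pairwise distances are invariant under rotation, while the centroid is equivariant, i.e. it rotates along with the input.

```python
import numpy as np

def random_rotation():
    # QR decomposition of a random matrix yields a random orthogonal matrix;
    # multiplying by its determinant ensures a proper rotation (det = +1).
    q, _ = np.linalg.qr(np.random.randn(3, 3))
    return q * np.linalg.det(q)

points = np.random.randn(5, 3)  # a toy point cloud
R = random_rotation()

# Invariant function: pairwise distances do not change under rotation.
def invariant(x):
    return np.linalg.norm(x[:, None] - x[None, :], axis=-1)

# Equivariant function: the centroid rotates along with the input.
def equivariant(x):
    return x.mean(axis=0)

# f(R x) == f(x)          (invariance)
assert np.allclose(invariant(points @ R.T), invariant(points))
# f(R x) == R f(x)        (equivariance)
assert np.allclose(equivariant(points @ R.T), R @ equivariant(points))
```

A QM9-style scalar property behaves like `invariant` here, whereas predicting 3D coordinates or displacements, as a structure-prediction task would, behaves like `equivariant`.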

lucidrains commented 3 years ago

@FabianFuchsML Thank you Fabian! I think I will go ahead and learn and construct both. And thank you for working in a field that may have played a pivotal role in one of the wonders of machine learning!