Closed CiaoHe closed 3 years ago
Hi, Luci: sorry, it's me again. I was confused by the definitions of row-wise and column-wise attention here: https://github.com/lucidrains/alphafold2/blob/586792d738b9867d02b85a304c9b56091121b460/alphafold2_pytorch/alphafold2.py#L208-L209 https://github.com/lucidrains/alphafold2/blob/586792d738b9867d02b85a304c9b56091121b460/alphafold2_pytorch/alphafold2.py#L219-L220
As I understand it, the row-wise `w_x` should be represented by `(b h) w d`, since once we fetch one row, each row should have `w` (width) units.
So maybe the two definitions here need to be swapped?
@CiaoHe you are right, this was not clear. Fixed in https://github.com/lucidrains/alphafold2/releases/tag/0.4.13!