Implementation of SE3-Transformers for Equivariant Self-Attention, in PyTorch. This repository is geared towards eventual integration with an Alphafold2 replication.
Hi, as an SE3-Transformer beginner, I want to apply it to point clouds in order to get point-wise features. I ran the first demo successfully, but I don't understand what the parameters mean.

At inference time, what does the `mask` parameter mean?

Can you share a point-cloud training demo, or any information about the right way to apply this model to point clouds?
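For context on the `mask` question: in this repository's forward pass, `mask` appears to be a boolean tensor of shape `(batch, num_points)` that flags which entries are real points versus padding, so that attention ignores the padded slots when point clouds of different sizes are batched together. Here is a minimal NumPy sketch of that padding-and-mask convention (the `pad` helper is hypothetical, just for illustration; it is not part of the library):

```python
import numpy as np

# Two point clouds of different sizes (3 and 5 points), padded to a common length.
# The mask marks real points (True) vs padding (False); a model's attention can
# use it to ignore padded entries when computing point-wise features.
max_points = 5
cloud_a = np.random.randn(3, 3)   # 3 points, xyz coordinates
cloud_b = np.random.randn(5, 3)   # 5 points, xyz coordinates

def pad(cloud, n):
    """Pad a (k, 3) cloud to (n, 3) with zeros; return (coords, boolean mask)."""
    k = cloud.shape[0]
    coords = np.zeros((n, 3))
    coords[:k] = cloud
    mask = np.zeros(n, dtype=bool)
    mask[:k] = True
    return coords, mask

coors_a, mask_a = pad(cloud_a, max_points)
coors_b, mask_b = pad(cloud_b, max_points)
coors = np.stack([coors_a, coors_b])  # (2, 5, 3) batched coordinates
mask = np.stack([mask_a, mask_b])     # (2, 5) boolean padding mask

print(mask.sum(axis=1))  # valid points per cloud -> [3 5]
```

If all your point clouds have the same number of points, a mask of all `True` (as in the README's demo) should be equivalent to no masking at all; the parameter only matters once you batch variable-sized clouds with padding.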