lucidrains / se3-transformer-pytorch
Implementation of SE3-Transformers for Equivariant Self-Attention, in Pytorch. This specific repository is geared towards integration with an eventual Alphafold2 replication.
MIT License · 262 stars · 23 forks
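For context on what the issues below refer to: the package exposes an SE3Transformer module that attends over per-point scalar features and 3D coordinates. The sketch below follows the shape of the project's README usage; the constructor arguments are real parameter names from that README, but the specific values here are illustrative only.

```python
import torch
from se3_transformer_pytorch import SE3Transformer

# illustrative hyperparameters, smaller than the README's example values
model = SE3Transformer(
    dim = 64,          # dimension of the type-0 (scalar) input features
    heads = 2,         # number of attention heads
    depth = 2,         # number of equivariant attention layers
    dim_head = 32,     # dimension per attention head
    num_degrees = 2    # feature degrees used internally (types 0..num_degrees-1)
)

feats = torch.randn(1, 32, 64)     # per-point scalar features
coors = torch.randn(1, 32, 3)      # 3D coordinates of each point
mask  = torch.ones(1, 32).bool()   # padding mask over points

out = model(feats, coors, mask)    # (1, 32, 64) type-0 output features
```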
Issues (newest first)
#19 training pointcloud data · xins981 · opened 1 year ago · 0 comments
#18 Backup Data · danielajisafe · opened 1 year ago · 0 comments
#17 CUDA out of memory · PengCheng-NUDT · opened 2 years ago · 2 comments
#16 small bug · MattMcPartlon · closed 3 years ago · 5 comments
#15 denoise.py bugfix · javierbq · closed 3 years ago · 0 comments
#14 SE3Transformer constructor hangs · mpdprot · opened 3 years ago · 1 comment
#13 allow one to set their own path to cache directory with environmental… · lucidrains · closed 3 years ago · 0 comments
#12 vector gating · lucidrains · closed 3 years ago · 0 comments
#11 Linear layer · abdalazizrashid · opened 3 years ago · 1 comment
#10 INTERNAL ASSERT FAILED · denjots · closed 3 years ago · 4 comments
#9 Whether SE3 needs pre-training · zyk19981118 · opened 3 years ago · 2 comments
#8 faster loop · hypnopump · closed 3 years ago · 1 comment
#7 Question about continuous edge features · MattMcPartlon · closed 3 years ago · 1 comment
#6 How to populate input variable length data · zyk19981118 · closed 3 years ago · 8 comments
#5 question about non scalar output · Chen-Cai-OSU · closed 3 years ago · 3 comments
#4 Reversible flag odd results · denjots · closed 3 years ago · 3 comments
#3 Breaking equivariance · brennanaba · closed 3 years ago · 18 comments
#2 CPU/CUDA masking error · denjots · closed 3 years ago · 8 comments
#1 multiple molecules cases · thegodone · opened 3 years ago · 2 comments
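Several issues above (#3 Breaking equivariance, #2 CPU/CUDA masking error) concern whether the network's outputs actually respect SE(3) symmetry. Below is a minimal sanity check one might run against such a report: apply a random proper rotation to the input coordinates and verify the scalar outputs are unchanged. It assumes the default forward pass returns type-0 (rotation-invariant) features, as in the README example; the tolerance may need loosening in float32.

```python
import torch
from se3_transformer_pytorch import SE3Transformer

torch.manual_seed(0)

model = SE3Transformer(dim = 32, heads = 2, depth = 1, num_degrees = 2)
model.eval()  # disable any stochastic layers for a deterministic comparison

feats = torch.randn(1, 16, 32)
coors = torch.randn(1, 16, 3)
mask  = torch.ones(1, 16).bool()

# random proper rotation: orthogonalize a Gaussian matrix, force det = +1
Q, _ = torch.linalg.qr(torch.randn(3, 3))
if torch.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]

with torch.no_grad():
    out_original = model(feats, coors, mask)
    out_rotated  = model(feats, coors @ Q.T, mask)

# type-0 outputs should be invariant to rotation up to numerical error
print(torch.allclose(out_original, out_rotated, atol = 1e-4))
```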