lucidrains / alphafold2
To eventually become an unofficial PyTorch implementation / replication of AlphaFold2, as details of the architecture are released
MIT License · 1.54k stars · 254 forks
Issues
#109 · How to adjust the recycling times in model? (opened, Syaoran036, 1 year ago, 0 comments)
#106 · CVE-2007-4559 Patch (opened, TrellixVulnTeam, 1 year ago, 0 comments)
#104 · read me dimensions are off. Example to run alpha fold 2 msa, and seq lengths should be the same. One is 128, the other is 120. (opened, ofTarradiddle, 2 years ago, 0 comments)
#103 · How can I get the coordinates of all the atoms? (opened, RayTang88, 2 years ago, 0 comments)
#102 · outerMean module question (opened, ifromeast, 2 years ago, 1 comment)
#101 · __init__() got an unexpected keyword argument 'structure_module_type' (opened, ifromeast, 2 years ago, 4 comments)
#98 · Can AlphaFold2 predict the structure of cyclic peptides? (opened, JustinDoIt, 3 years ago, 1 comment)
#97 · [todo] ensembling (opened, lucidrains, 3 years ago, 0 comments)
#96 · Make sure random_tokens is on correct device (closed, aced125, 3 years ago, 0 comments)
#95 · Typo? (closed, CiaoHe, 3 years ago, 3 comments)
#94 · Example script of an end-to-end prediction from FASTA sequence to 3D pdb (closed, h-midlothian, 3 years ago, 2 comments)
#93 · [todo] structure module (opened, lucidrains, 3 years ago, 3 comments)
#92 · Missing adding the `relpos` to pairwise-repr `x` (closed, CiaoHe, 3 years ago, 1 comment)
#91 · Maybe duplicate residual addition (closed, CiaoHe, 3 years ago, 1 comment)
#90 · Should be first MSA-attention then Pairwise-Attention (closed, CiaoHe, 3 years ago, 1 comment)
#89 · pairwise_repr should be used in rowAttention (closed, CiaoHe, 3 years ago, 1 comment)
#88 · Maybe Inverse of the outgoing-attention and ingoing-attention here (closed, CiaoHe, 3 years ago, 1 comment)
#87 · TypeError: __init__() got an unexpected keyword argument 'num_backbone_atoms' (opened, jackli777, 3 years ago, 3 comments)
#86 · OuterMean block's typo (closed, CiaoHe, 3 years ago, 4 comments)
#85 · The definition of row-wise attention and col-wise attention (closed, CiaoHe, 3 years ago, 1 comment)
#84 · MSA attention gated problem (closed, CiaoHe, 3 years ago, 1 comment)
#83 · ModuleNotFoundError: No module named 'invariant_point_attention' (closed, jackli777, 3 years ago, 2 comments)
#82 · [todo] positional encoding (closed, lucidrains, 3 years ago, 1 comment)
#81 · [todo] checkpointing (opened, lucidrains, 3 years ago, 3 comments)
#80 · [todo] BERT loss on MSA (opened, lucidrains, 3 years ago, 4 comments)
#79 · [todo] FAPE loss (closed, lucidrains, 3 years ago, 2 comments)
#78 · [todo] recycling (closed, lucidrains, 3 years ago, 1 comment)
#77 · [todo] embeddings (closed, lucidrains, 3 years ago, 1 comment)
#76 · problem with alphafold2/train_pre.py (opened, jackli777, 3 years ago, 0 comments)
#75 · Dev (closed, CiaoHe, 3 years ago, 0 comments)
#74 · Add simple training script (opened, superantichrist, 3 years ago, 2 comments)
#73 · no coords_3d in train_end2end.py (closed, moonblue333, 3 years ago, 2 comments)
#72 · Integrate newest changes (closed, hypnopump, 3 years ago, 1 comment)
#71 · the dimensions of coordinates in CASP12 (closed, zhangyi-taotao, 3 years ago, 3 comments)
#70 · demonstrate sidechain (closed, hypnopump, 3 years ago, 1 comment)
#69 · Fix utils for better training (closed, hypnopump, 3 years ago, 2 comments)
#68 · solve mds problems? (closed, hypnopump, 3 years ago, 5 comments)
#67 · MDScaling isn't working right! (closed, xiongzhp, 3 years ago, 3 comments)
#66 · add new results from @lhatsk (closed, lucidrains, 3 years ago, 0 comments)
#65 · fix tests (closed, lucidrains, 3 years ago, 0 comments)
#64 · solving adjmat and a couple issues in utils (closed, hypnopump, 3 years ago, 0 comments)
#63 · fix row scaling dtype (closed, lucidrains, 3 years ago, 0 comments)
#62 · allow for tied row attention with uneven number of MSAs per sequence … (closed, lucidrains, 3 years ago, 2 comments)
#61 · Basic pretrained models + quickstart instructions? (opened, TylerBalsam, 3 years ago, 1 comment)
#60 · fix mds tensors not in same device (closed, hypnopump, 3 years ago, 1 comment)
#59 · Add PL Lightning to Enable Distributed Training and Deep Speed (opened, aribornstein, 3 years ago, 3 comments)
#58 · Add trrosetta dataset (closed, blazingsiyan, 3 years ago, 4 comments)
#57 · test error in utils (closed, lucidrains, 3 years ago, 1 comment)
#56 · MSA tensor format (opened, panganqi, 3 years ago, 5 comments)
#55 · fix embeddings with esm + msa mask (closed, hypnopump, 3 years ago, 0 comments)