lucidrains / point-transformer-pytorch
Implementation of the Point Transformer layer, in Pytorch. MIT License · 592 stars · 58 forks
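Several of the issues below (#13, #11, #9) concern how the layer is invoked, in particular how to mask point clouds with different numbers of points. As a minimal usage sketch, assuming the package exposes a `PointTransformerLayer` whose constructor takes `dim`, `pos_mlp_hidden_dim`, and `attn_mlp_hidden_mult` as in the repository README:

```python
import torch
from point_transformer_pytorch import PointTransformerLayer

# Example hyperparameters; argument names follow the repo README
attn = PointTransformerLayer(
    dim = 128,                # per-point feature dimension
    pos_mlp_hidden_dim = 64,  # hidden width of the positional-encoding MLP
    attn_mlp_hidden_mult = 4  # expansion factor of the attention MLP
)

feats = torch.randn(1, 16, 128)  # (batch, num_points, dim) point features
pos = torch.randn(1, 16, 3)      # (batch, num_points, 3) xyz coordinates
mask = torch.ones(1, 16).bool()  # True for real points, False for padding

out = attn(feats, pos, mask = mask)  # (1, 16, 128)
```

The boolean mask is the usual way to batch clouds of unequal size: pad every cloud to a common length and mark the padded slots False so they are excluded from attention.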
Issues
#16 · Part segmentation · Lizhinwafu · opened 2 years ago · 0 comments
#15 · Did You Falsify Your Experimental Results??? · TruthIsEveryThing · closed 2 years ago · 1 comment
#14 · Scalar attention or vector attention in the multi-head variant · ZikangZhou · opened 2 years ago · 2 comments
#13 · Point clouds with different numbers of points · 1999kevin · closed 2 years ago · 0 comments
#12 · Can you provide a multi-head version of the point transformer layer? · HankerWu · closed 2 years ago · 3 comments
#11 · Issues with my wrapper code · StellaAthena · closed 3 years ago · 1 comment
#10 · Can you provide the full training code and pretrained models? · densechen · opened 3 years ago · 0 comments
#9 · The layer structure and mask · ayushais · opened 3 years ago · 1 comment
#8 · Invariant to cardinality? · decadenza · opened 3 years ago · 0 comments
#7 · Is the complete code available? · swzaaaaaaa · opened 3 years ago · 2 comments
#6 · How should the paper's claim that self-attention is invariant to cardinality be understood? · swzaaaaaaa · opened 3 years ago · 0 comments
#5 · Costs too much memory · JLU-Neal · opened 3 years ago · 9 comments
#4 · Transition Up Module · 072jiajia · opened 3 years ago · 3 comments
#3 · Different attention map for each element of the feature dimension · lucidrains · closed 3 years ago · 0 comments
#2 · The shape of the vector attention map · Liu-Feng · opened 3 years ago · 6 comments
#1 · Would you provide the full model, training pipeline, and pre-trained weights? · jimmy15923 · opened 3 years ago · 0 comments