lucidrains / segformer-pytorch
Implementation of SegFormer, an attention + MLP neural network for segmentation, in PyTorch
MIT License · 342 stars · 43 forks
Issues
#14 · Where are the pretrained weights? · TheWangYang · opened 7 months ago · 0 comments
#13 · Pretrained weights · Philos01 · opened 12 months ago · 0 comments
#12 · How to use pretrained weights? · dmndxld · opened 1 year ago · 0 comments
#11 · Create 1.py · Smartzzh · opened 1 year ago · 0 comments
#10 · BatchNorm or LayerNorm? · Napier7 · opened 2 years ago · 0 comments
#9 · How to output the original H×W size? · York1996OutLook · opened 3 years ago · 2 comments
#8 · Something is wrong with your implementation · camlaedtke · opened 3 years ago · 0 comments
#7 · Why use InstanceNorm instead of LayerNorm? · takfate · closed 3 years ago · 1 comment
#6 · Patch size not used · isega24 · closed 3 years ago · 1 comment
#5 · Model weights + model output H×W · isega24 · opened 3 years ago · 2 comments
#4 · The model configurations for all of SegFormer B0–B5 · rose-jinyang · opened 3 years ago · 5 comments
#3 · A question about the KV reshape in Efficient Self-Attention · masszhou · opened 3 years ago · 1 comment
#2 · The decoder in your implementation is a Conv2d, which differs from the MLP decoder used in the SegFormer paper · AncientRemember · opened 3 years ago · 7 comments
#1 · Is the image-size parameter of MixVisionTransformer necessary? · AncientRemember · closed 3 years ago · 1 comment