Open AlexeyAB opened 5 years ago
This is (another) exciting and impressive potential enhancement. I was wondering, @AlexeyAB - how do you find time to do all this stuff and then give it away for free - are you being paid to produce this stuff or are you a passionate hobbyist?
@AlexeyAB have you implemented the YOLACT architecture in instance-segmenter.c?
Hi @AlexeyAB,
I really need it. When do you think we can use this feature?
Thanks in advance.
Why not use https://github.com/dbolya/yolact ?
@zpmmehrdad I suggest using the repo @LukeAI pointed to; it gets good FPS and is implemented in PyTorch.
@abhigoku10 Hi,
Thanks for suggesting, I want to use it in C++.
@zpmmehrdad Yes, I also wanted to use C++, but later I shifted my whole development environment. Please let me know if you come across any other repo.
CenterMask: Real-Time Anchor-Free Instance Segmentation: https://arxiv.org/pdf/1911.06667.pdf
Faster and better than YOLACT.
@AlexeyAB
YOLACT++: Better Real-time Instance Segmentation
Method | mAP | FPS
---|---|---
YOLACT | 29.8% | 33.0
YOLACT++ | 34.1% | 33.5
The differences from YOLACT:
Add real-time Instance Segmentation:
Look at: https://github.com/pjreddie/darknet/blob/master/examples/instance-segmenter.c
YOLACT++ https://arxiv.org/abs/1912.06218v1
YOLACT (You Only Look At CoefficienTs): https://arxiv.org/abs/1904.02689v1 and https://github.com/dbolya/yolact
- Add an additional output-prototype-layer with full-resolution masks (`W x H x K`) with `upsampling_layers = subsampling_layers`, where `K=32` is the number of prototype masks.
- Use `(4+1+classes+K)*anchors` outputs of the Detector instead of `(4+1+classes)*anchors`. Then we do NMS or Fast-NMS. Then we do GEMM between the output-prototype-layer `[W x H][K]` and the found N bboxes `[N][K]` to get `[WxH][N]` masks for the N found bboxes, and apply `logistic_activation`.
- During training: to compute the mask loss, we simply take the pixel-wise binary cross entropy between the assembled masks M and the ground truth masks. We crop with the ground truth bounding box and divide Lmask by the ground truth bounding box area.
Test on: