@iamhankai Hi,
Great work!
Why did you exclude EfficientNet-B0 (0.390 BFLOPs, 76.3% Top-1) from the Accuracy-Latency chart?
Also, what mini_batch_size did you use for training GhostNet?
Actually, we have tested the latency of EfficientNet-B0: it is too large (~98 ms) to fit inside the current chart.
In addition, we have also tested the latency of MixNet (https://github.com/AlexeyAB/darknet/issues/4503), and it is also too large (>85 ms). The varied kernel sizes within a single depthwise conv layer are harmful to inference speed.
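For readers unfamiliar with MixNet, here is a minimal sketch of what such a mixed-kernel depthwise layer looks like (an illustration only, not MixNet's actual code; the 3/5/7 kernel sizes and the even channel split are assumptions). Each split becomes its own small depthwise conv, so the layer launches several poorly-utilized kernels instead of one fused op, which matches the latency complaint above:

```python
import torch
import torch.nn as nn

class MixedKernelDepthwise(nn.Module):
    """MixNet-style depthwise layer: channels are split into groups and
    each group is convolved with a different kernel size (3/5/7 here)."""

    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        splits = [channels // len(kernel_sizes)] * len(kernel_sizes)
        splits[0] += channels - sum(splits)  # absorb any remainder
        self.splits = splits
        # One separate depthwise conv per split -- several small kernel
        # launches instead of one fused op, hence the poor latency.
        self.convs = nn.ModuleList(
            nn.Conv2d(c, c, k, padding=k // 2, groups=c, bias=False)
            for c, k in zip(splits, kernel_sizes)
        )

    def forward(self, x):
        chunks = torch.split(x, self.splits, dim=1)
        return torch.cat([conv(c) for conv, c in zip(self.convs, chunks)], dim=1)
```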
mini_batch_size=1024 for training on 8 GPUs (i.e., 128 images per GPU).
@iamhankai Thanks.
So GhostNet looks much more promising.
Did you compare the latency (ms) of GhostNet vs MobileNetV3 vs MnasNet on GPU or TPU?
Also, did you compare the accuracy/latency (ms) of these models against PeleeNet, SNet, DenseNet, or at least ResNet18?
As I understand, GhostBlock is just Conv2D + depthwise_conv2d + concat?
Yes. With these efficient operators, GhostNet can be simple yet fast.
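To make that concrete, here is a minimal runnable sketch of a Ghost module, written in PyTorch for brevity rather than taken from the repo's TF-slim code; the ratio and dw_size hyperparameter names are assumptions. A pointwise Conv2D produces a few intrinsic maps, a cheap depthwise conv derives the remaining "ghost" maps, and the two are concatenated:

```python
import math
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    def __init__(self, in_ch, out_ch, ratio=2, dw_size=3, stride=1):
        super().__init__()
        init_ch = math.ceil(out_ch / ratio)   # intrinsic feature maps
        cheap_ch = init_ch * (ratio - 1)      # "ghost" feature maps
        self.out_ch = out_ch
        # Primary convolution: an ordinary (pointwise) Conv2D.
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, 1, stride, 0, bias=False),
            nn.BatchNorm2d(init_ch),
            nn.ReLU(inplace=True),
        )
        # Cheap operation: depthwise conv over the intrinsic maps.
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, cheap_ch, dw_size, 1, dw_size // 2,
                      groups=init_ch, bias=False),
            nn.BatchNorm2d(cheap_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        y = self.primary(x)             # Conv2D
        z = self.cheap(y)               # depthwise_conv2d
        out = torch.cat([y, z], dim=1)  # concat
        return out[:, :self.out_ch]

# e.g. GhostModule(16, 32)(torch.randn(1, 16, 56, 56)) -> (1, 32, 56, 56)
```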
@iamhankai Thanks for your answers and SOTA network!
Why do you use stride = 2?
Why do you define out_channel but never use it? https://github.com/iamhankai/ghostnet/blob/47ef752446ba761dc5342ce06cbc26537b038289/myconv2d.py#L29
Why is there a Conv2D(1280 filters) layer after the slim.avg_pool2d layer? https://github.com/iamhankai/ghostnet/blob/47ef752446ba761dc5342ce06cbc26537b038289/ghost_net.py#L218-L234
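For context, here is the head structure being asked about, sketched in PyTorch under the assumption that the channel counts follow the GhostNet paper's architecture table (960 -> 1280 -> 1000); the repo's actual code is TF-slim:

```python
import torch
import torch.nn as nn

# Global average pooling first, then a 1x1 Conv2D with 1280 filters on
# the pooled 1x1 feature map, then the classifier. Running the 1280-
# channel conv after pooling keeps it cheap, since it only sees a 1x1
# map (the same trick MobileNetV3 uses in its last stage).
head = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),              # ~ slim.avg_pool2d over the map
    nn.Conv2d(960, 1280, kernel_size=1),  # the Conv2D(1280 filters) layer
    nn.ReLU(inplace=True),
    nn.Flatten(),
    nn.Linear(1280, 1000),
)

print(head(torch.randn(1, 960, 7, 7)).shape)  # torch.Size([1, 1000])
```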