AlexeyAB opened this issue 4 years ago
@WongKinYiu Hi,
I added ghostnet.cfg.txt
so you can try to train it for 600 000 iterations with batch_size=192 (mini_batch_size=96).
It can be trained in about 2 weeks on a GeForce RTX 2070.
Maybe it can be fast on CPU/neurochips (OpenCV-dnn).
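A minimal sketch of the [net] settings implied above (subdivisions=2 is just one way to get mini_batch_size = batch/subdivisions = 96; all other parameters stay as in the attached ghostnet.cfg.txt):
```
[net]
# mini_batch_size = batch / subdivisions = 192 / 2 = 96
batch=192
subdivisions=2
# 600 000 training iterations
max_batches=600000
```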
@AlexeyAB thank you!
@WongKinYiu I just added dropout after avg-pooling. So if you already started training, you can download new cfg-file and continue training.
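For reference, the tail of the cfg after this change might look roughly like this; the dropout probability and the 1000-filter classifier conv are assumptions for illustration, the attached ghostnet.cfg.txt is authoritative:
```
[avgpool]

# dropout added after global average pooling; probability value is an assumption
[dropout]
probability=.2

[convolutional]
filters=1000
size=1
stride=1
pad=1
activation=linear

[softmax]
```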
I tested darknet-19 on CPU and it takes 1.3 seconds per image (image size 500*374). What could be going wrong?
Did you build darknet with OPENMP=1 AVX=1? And which CPU do you use?
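In case it helps, a minimal CPU-only build along those lines (OPENMP and AVX are Makefile variables in this repo; passing them on the make command line is just one way to set them):
```
make clean
make OPENMP=1 AVX=1 GPU=0 CUDNN=0
```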
Thanks, I will update my code and try again.
@AlexeyAB
top-1 1.5%, top-5 5.6%.
@WongKinYiu Thanks! ghostnet.cfg.txt - 1.5% Top-1? Is it near ~0? Can you share the cfg/weights files?
@AlexeyAB
@WongKinYiu This repo may help: https://github.com/d-li14/ghostnet.pytorch
@iamhankai thank you very much.
@WongKinYiu I have tested with your cfg/weights. The result is almost the same as yours. I started training a few days ago with almost the same cfg as yours, except for batch and subdivisions. My result is top-1 30%, top-5 64% (300 000 iterations, still training). This is strange despite using almost the same cfg.
@WongKinYiu Thanks!
@rsek147 Can you attach your cfg-file?
@AlexeyAB @WongKinYiu I think the ghostnet.cfg.txt is wrong; it can be compared against https://github.com/d-li14/ghostnet.pytorch/blob/master/ghostnet.py. I used the Ghost module in MobileNetV3-Small, and it gets 20% top-1 after 20 000 iterations with batch size 256.
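For illustration, a single Ghost module (expansion ratio 2) could be written in darknet cfg syntax roughly as below; the channel counts are made up and the layer options are assumptions based on the paper and the PyTorch reference, not taken from ghostnet.cfg.txt:
```
# primary 1x1 convolution -> 16 "intrinsic" feature maps (illustrative count)
[convolutional]
batch_normalize=1
filters=16
size=1
stride=1
pad=1
activation=relu

# "cheap operation": 3x3 depthwise conv (groups = filters) -> 16 "ghost" maps
[convolutional]
batch_normalize=1
filters=16
size=3
groups=16
stride=1
pad=1
activation=relu

# concatenate intrinsic and ghost feature maps -> 32 output channels
[route]
layers=-2,-1
```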
@AlexeyAB Hi! When I train with GhostNet, the loss is -nan. How can I solve this? I am doing a fruit detection test (class=1) using the ghostnet.cfg file above. What parts should I modify?
Thanks!
@WongKinYiu Hi, have you retrained GhostNet since then? If so, can you share the .cfg and .weights files?
I did not get a good result.
paper: https://arxiv.org/abs/1911.11907v1
source: https://github.com/iamhankai/ghostnet
model: ghostnet.cfg.txt
GPU: GeForce RTX 2070 - Darknet framework (GPU=1 CUDNN=1 CUDNN_HALF=1)
CPU: Intel Core i7 6700k - Darknet framework (OPENMP=1 AVX=1)
Comparison table: https://github.com/AlexeyAB/darknet/issues/4203#issuecomment-548955416
Maybe better than MobileNetV3, EfficientNet, MixNet, etc. https://github.com/iamhankai/ghostnet/issues/1
We measure the actual inference speed on an ARM-based mobile phone using the TFLite tool; we use single-threaded mode with batch size 1:
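The exact command for that measurement is not given; an illustrative invocation of the TFLite benchmark tool under the stated settings might look like this (the model filename is a placeholder):
```
# single-threaded mode as described above; model file is a placeholder
bazel run -c opt //tensorflow/lite/tools/benchmark:benchmark_model -- \
  --graph=ghostnet.tflite \
  --num_threads=1
```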