Robert-JunWang / Pelee

Pelee: A Real-Time Object Detection System on Mobile Devices
Apache License 2.0

How about peleeNet training from scratch? #30

Open MrWhiteHomeman opened 6 years ago

MrWhiteHomeman commented 6 years ago

I think PeleeNet is similar to DSOD. Does training PeleeNet from scratch work? I would appreciate any advice. Thank you!

ujsyehao commented 6 years ago

@MrWhiteHomeman I am trying it; I will respond to you later.

ujsyehao commented 6 years ago

Here is a link: https://pan.baidu.com/s/1vZONIe2pBkxjo-s5wP3zAg (password: 3ip6).

MrWhiteHomeman commented 6 years ago

@ujsyehao Thank you so much for your reply! I have another question about the code: in feature_extractor.py, line 30, why does 'stage4_tb/ext/pm2/res/relu' appear twice in Pelee.mbox_source_layers? Can you give me some advice? Thank you!

ujsyehao commented 6 years ago

You can use Netscope (http://ethereon.github.io/netscope/#/editor) to view the PeleeNet-SSD network structure. You will find that the stage4_tb/ext/pm2/res layer is used twice: it generates both the ext/pm1_mbox_loc layer and the ext/pm2_mbox_loc layer (the conf and priorbox layers follow the same pattern). The reason is that Pelee drops the 38x38 feature map (see the Pelee paper) and uses only the remaining five feature maps (19x19, 10x10, 5x5, 3x3, 1x1), but SSD merges prior boxes from six layers, so the author uses the 19x19 feature map (i.e. stage4_tb/ext/pm2/res) twice to predict two sets of conf/loc/priorbox layers.
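The duplicated entry can be sketched as a plain Python list. This is a hypothetical reconstruction for illustration, not copied from feature_extractor.py; only the repeated 'stage4_tb/ext/pm2/res/relu' name is confirmed by the thread, and the other layer names are made-up placeholders:

```python
# SSD expects six source layers, but Pelee drops the 38x38 map,
# so the 19x19 layer name appears twice in the source-layer list.
mbox_source_layers = [
    'stage4_tb/ext/pm2/res/relu',  # 19x19, reused in place of the 38x38 map
    'stage4_tb/ext/pm2/res/relu',  # 19x19
    'ext/pm3/res/relu',            # 10x10 (placeholder names from here on)
    'ext/pm4/res/relu',            # 5x5
    'ext/pm5/res/relu',            # 3x3
    'ext/pm6/res/relu',            # 1x1
]

# Six prediction branches total, one layer feeding two of them.
assert len(mbox_source_layers) == 6
assert mbox_source_layers.count('stage4_tb/ext/pm2/res/relu') == 2
```

Each entry in the list gets its own loc/conf/priorbox head, which is why two heads end up attached to the same 19x19 layer.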

ujsyehao commented 6 years ago

MobileNet-SSD also follows this design pattern; I will update later.

MrWhiteHomeman commented 6 years ago

@ujsyehao Hello, I have a question about the batch size. In this paper the batch size is 32; if I change it to 64, will I get a better test result? I would appreciate any advice. Thank you!

ujsyehao commented 6 years ago

No. Batch size mainly affects training time; it has no direct relation to model performance.

MrWhiteHomeman commented 6 years ago

@ujsyehao I have always had a question about batch size: if the batch size is too big, will it hurt results? I also know DetNet (from Megvii) sets the batch size to 256 and gets a great result...

ujsyehao commented 6 years ago

You can take a look at it.

RainFrost1 commented 6 years ago

Could you please share the prototxt again? The link (https://pan.baidu.com/s/1vZONIe2pBkxjo-s5wP3zAg, password: 3ip6) has expired. Thank you very much! @MrWhiteHomeman @ujsyehao

foralliance commented 5 years ago

@ujsyehao The claim that batch size does not affect model performance should only hold when the BN layer parameters in the model are frozen. If the BN parameters are also fine-tuned during training, then batch size will still affect model performance, right?

ujsyehao commented 5 years ago

@foralliance Model performance depends on the model itself; batch size is just a hyperparameter. If you change the batch size and then pick other suitable hyperparameters such as base_lr, the final result will be the same. Generally speaking, a larger batch just trains faster and converges more easily; it does not fundamentally determine model performance.
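The "pick other suitable hyperparameters such as base_lr" step is often done with the linear scaling rule. A minimal sketch, assuming that heuristic (it is a common convention, not something prescribed by this repo, and the base_lr value below is hypothetical):

```python
def scaled_base_lr(base_lr, base_batch_size, new_batch_size):
    """Linear scaling rule: scale the learning rate proportionally
    to the batch size, so the effective update per epoch stays similar."""
    return base_lr * new_batch_size / base_batch_size

# If training used batch size 32 with base_lr 0.005 (hypothetical value),
# doubling the batch to 64 would suggest doubling base_lr as well.
print(scaled_base_lr(0.005, 32, 64))  # 0.01
```

Under this rule, changing the batch size alone is not expected to improve final accuracy; it mainly trades memory for wall-clock training time.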

foralliance commented 5 years ago

@ujsyehao Hi, here is a previous discussion about batch size with one of the authors.

ujsyehao commented 5 years ago

@foralliance I read that answer. Assuming accum_batch_size is fixed, I agree with the view that batch size does not affect model performance when the model has no BN layers or the BN parameters are frozen.
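The BN caveat above can be seen numerically: batch norm normalizes with per-mini-batch statistics, and those statistics get noisier as the batch shrinks. A minimal NumPy sketch (my own illustration, not code from the repo):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=4096)  # toy activations for one channel

def spread_of_batch_means(x, batch_size):
    # Split the data into mini-batches and measure how much the
    # per-batch mean (the statistic BN normalizes with) wanders.
    means = x.reshape(-1, batch_size).mean(axis=1)
    return means.std()

# Smaller batches give noisier normalization statistics, which is one
# reason trainable BN makes results sensitive to batch size.
print(spread_of_batch_means(data, 8))    # larger spread
print(spread_of_batch_means(data, 256))  # smaller spread
```

With frozen BN (or no BN), this noise source disappears, which matches the point agreed on above.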

EvaneSnow commented 4 years ago

Hi everyone, does anyone know how to train Pelee for object detection plus lane detection? Thanks!