PaddlePaddle / PaddleDetection

Object Detection toolkit based on PaddlePaddle. It supports object detection, instance segmentation, multiple object tracking and real-time multi-person keypoint detection.
Apache License 2.0

Why is training on my own data so slow with python tools/train.py -c configs/picodet/picodet_s_320_voc.yml --eval? #4918

Open LMR2018 opened 2 years ago

LMR2018 commented 2 years ago

Why is training on my own data so slow with python tools/train.py -c configs/picodet/picodet_s_320_voc.yml --eval?

```
(base) F:\murong\projects1\PaddleDetection-release-2.3>python tools/train.py -c configs/picodet/picodet_s_320_voc.yml --eval
C:\Users\Dev16\Anaconda3\lib\site-packages\socks.py:58: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
  from collections import Callable
C:\Users\Dev16\Anaconda3\lib\site-packages\win32\lib\pywintypes.py:2: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import imp, sys, os
W1216 16:59:38.004186 9828 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 6.1, Driver API Version: 11.1, Runtime API Version: 10.2
W1216 16:59:38.016156 9828 device_context.cc:465] device: 0, cuDNN Version: 7.6.
[12/16 16:59:39] ppdet.utils.checkpoint INFO: ['_fc.bias', '_fc.weight', '_last_conv._batch_norm._mean', '_last_conv._batch_norm._variance', '_last_conv._batch_norm.bias', '_last_conv._batch_norm.weight', '_last_conv._conv.weight', 'last_conv.weight'] in pretrained weight is not used in the model, and its will not be loaded
[12/16 16:59:40] ppdet.utils.checkpoint INFO: Finish loading model weights: model/ESNet_x0_75_pretrained.pdparams
[12/16 16:59:41] ppdet.engine INFO: Epoch: [0] [  0/104] learning_rate: 0.040000 loss_vfl: 1.551724 loss_bbox: 0.895358 loss_dfl: 0.521692 loss: 2.968774 eta: 11:32:52 batch_cost: 1.3324 data_cost: 0.6602 ips: 12.0081 images/s
[12/16 17:00:28] ppdet.engine INFO: Epoch: [0] [ 20/104] learning_rate: 0.064000 loss_vfl: 1.410168 loss_bbox: 0.933549 loss_dfl: 0.481081 loss: 2.824308 eta: 18:19:51 batch_cost: 2.1557 data_cost: 1.5162 ips: 7.4222 images/s
[12/16 17:01:11] ppdet.engine INFO: Epoch: [0] [ 40/104] learning_rate: 0.088000 loss_vfl: 1.249832 loss_bbox: 0.865242 loss_dfl: 0.428205 loss: 2.542513 eta: 17:50:12 batch_cost: 2.0022 data_cost: 1.3908 ips: 7.9912 images/s
[12/16 17:01:54] ppdet.engine INFO: Epoch: [0] [ 60/104] learning_rate: 0.112000 loss_vfl: 1.094283 loss_bbox: 0.773494 loss_dfl: 0.380751 loss: 2.250960 eta: 17:33:03 batch_cost: 1.9640 data_cost: 1.4225 ips: 8.1466 images/s
[12/16 17:02:36] ppdet.engine INFO: Epoch: [0] [ 80/104] learning_rate: 0.136000 loss_vfl: 1.112156 loss_bbox: 0.703290 loss_dfl: 0.370269 loss: 2.181904 eta: 17:19:40 batch_cost: 1.9298 data_cost: 1.3763 ips: 8.2908 images/s
[12/16 17:03:18] ppdet.engine INFO: Epoch: [0] [100/104] learning_rate: 0.160000 loss_vfl: 1.186168 loss_bbox: 0.693515 loss_dfl: 0.354381 loss: 2.209870 eta: 17:15:16 batch_cost: 1.9681 data_cost: 1.4238 ips: 8.1295 images/s
[12/16 17:03:26] ppdet.engine INFO: Epoch: [1] [  0/104] learning_rate: 0.164800 loss_vfl: 1.168976 loss_bbox: 0.687513 loss_dfl: 0.346974 loss: 2.186113 eta: 17:10:27 batch_cost: 1.9444 data_cost: 1.4134 ips: 8.2290 images/s
[12/16 17:04:07] ppdet.engine INFO: Epoch: [1] [ 20/104] learning_rate: 0.188800 loss_vfl: 1.119198 loss_bbox: 0.709000 loss_dfl: 0.352092 loss: 2.209464 eta: 17:02:26 batch_cost: 1.8994 data_cost: 1.3629 ips: 8.4236 images/s
[12/16 17:04:50] ppdet.engine INFO: Epoch: [1] [ 40/104] learning_rate: 0.212800 loss_vfl: 1.064116 loss_bbox: 0.663903 loss_dfl: 0.350917 loss: 2.102975 eta: 17:01:04 batch_cost: 1.9642 data_cost: 1.4246 ips: 8.1458 images/s
[12/16 17:05:32] ppdet.engine INFO: Epoch: [1] [ 60/104] learning_rate: 0.236800 loss_vfl: 1.081842 loss_bbox: 0.631488 loss_dfl: 0.334639 loss: 2.041198 eta: 17:00:19 batch_cost: 1.9714 data_cost: 1.4469 ips: 8.1161 images/s
[12/16 17:06:13] ppdet.engine INFO: Epoch: [1] [ 80/104] learning_rate: 0.260800 loss_vfl: 1.123333 loss_bbox: 0.597034 loss_dfl: 0.321654 loss: 2.061877 eta: 16:56:26 batch_cost: 1.9147 data_cost: 1.4007 ips: 8.3563 images/s
[12/16 17:06:52] ppdet.engine INFO: Epoch: [1] [100/104] learning_rate: 0.284800 loss_vfl: 1.187028 loss_bbox: 0.580264 loss_dfl: 0.309837 loss: 2.081032 eta: 16:47:04 batch_cost: 1.7933 data_cost: 1.2734 ips: 8.9221 images/s
[12/16 17:07:01] ppdet.engine INFO: Epoch: [2] [  0/104] learning_rate: 0.289600 loss_vfl: 1.187028 loss_bbox: 0.571176 loss_dfl: 0.308245 loss: 2.081032 eta: 16:45:54 batch_cost: 1.7970 data_cost: 1.2709 ips: 8.9038 images/s
[12/16 17:07:39] ppdet.engine INFO: Epoch: [2] [ 20/104] learning_rate: 0.313600 loss_vfl: 1.152678 loss_bbox: 0.542991 loss_dfl: 0.294221 loss: 1.972140 eta: 16:36:30 batch_cost: 1.7533 data_cost: 1.2296 ips: 9.1256 images/s
[12/16 17:08:24] ppdet.engine INFO: Epoch: [2] [ 40/104] learning_rate: 0.337600 loss_vfl: 1.230537 loss_bbox: 0.464604 loss_dfl: 0.290910 loss: 1.982177 eta: 16:42:07 batch_cost: 2.0815 data_cost: 1.5124 ips: 7.6868 images/s
```

LMR2018 commented 2 years ago

[Two screenshots attached: QQ截图20211216174738, QQ截图20211216174945]

LMR2018 commented 2 years ago

[Screenshot attached: QQ截图20211216174816]

yghstill commented 2 years ago

@LMR2018

  1. Check whether your machine's RAM is maxed out. Data loading and preprocessing are the performance bottleneck; to speed up training, you can keep increasing worker_num. Also check whether your images themselves are large; resizing them locally to a smaller size beforehand can speed up training (see the config sketch after this list).
  2. For single-GPU training, be sure to lower the learning rate according to the linear scaling rule.
  3. When training on your own data, it is best to load a model pretrained on COCO, which speeds up convergence and lets you train for fewer epochs. https://github.com/PaddlePaddle/PaddleDetection/tree/release/2.3/configs/picodet#benchmark
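
In config terms, suggestions 1 and 3 map onto a handful of keys; the sketch below is illustrative only. The key names (worker_num, TrainReader.batch_size, pretrain_weights) follow the PicoDet configs in release/2.3, but the numeric values and the weights path are placeholders, not settings taken from this thread.

```yaml
# Illustrative sketch only; values and paths are placeholders, not recommendations
worker_num: 8                  # more data-loading workers, as long as CPU and RAM are not saturated

TrainReader:
  batch_size: 48               # keep whatever per-GPU batch size your GPU memory allows

# start from the released COCO-trained PicoDet-S weights (benchmark table linked in point 3)
pretrain_weights: path/to/picodet_s_320_coco.pdparams   # placeholder path
```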
LMR2018 commented 2 years ago

@yghstill

  1. The RAM is not maxed out, I have already raised worker_num to 12, and I resized both the images and the labels locally to 320*320.
  2. I have also lowered the single-GPU learning rate to 0.01 (see the worked example below).
  3. I am also training from a model pretrained on COCO: python tools/train.py -c configs/picodet/picodet_s_320_coco.yml --eval
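
For reference, the linear scaling rule from point 2 works out like this. The reference numbers below (4 GPUs, 80 images per GPU, base_lr 0.4) are assumptions for illustration only; read the real ones off the config you actually start from.

```yaml
# Linear scaling rule, illustrative numbers only:
#   reference setup: 4 GPUs x 80 images/GPU = 320 images per step, base_lr 0.4
#   this setup:      1 GPU  x 16 images/GPU =  16 images per step
#   scaled base_lr = 0.4 * (16 / 320) = 0.02
LearningRate:
  base_lr: 0.02
```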

After making these changes the speed improved somewhat, but it still does not feel fast enough. Is there anything else I can do to speed up training? On Windows, training on a 2080 GPU takes 4 hours even though my dataset is only 1800 images. I did add a TrainReader section to the picodet_s_320_coco.yml config:

sample_transforms: