tanluren / yolov3-channel-and-layer-pruning

yolov3 yolov4 channel and layer pruning, Knowledge Distillation 层剪枝,通道剪枝,知识蒸馏
Apache License 2.0

ValueError: need at least one array to concatenate #78

Open qaazii opened 4 years ago

qaazii commented 4 years ago

I have been having this error for two days. I have tried everything but don't know how to solve it. Please kindly reply.

    labels4 = np.concatenate(labels4, 0)
  File "<__array_function__ internals>", line 6, in concatenate
ValueError: need at least one array to concatenate
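For context, this is exactly what numpy raises when `np.concatenate` is given an empty list of arrays, which is what happens in the mosaic loader when none of the four sampled images contributes any labels. A minimal illustration, not taken from the repo:

```python
import numpy as np

labels4 = []                  # no label arrays were collected for the 4 mosaic images
np.concatenate(labels4, 0)    # ValueError: need at least one array to concatenate
```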

zbyuan commented 4 years ago

Are the samples available? What is your numpy version?

qaazii commented 4 years ago

> Are the samples available? What is your numpy version?

The samples are available, but some labels are empty because I only use 2 classes from the Pascal VOC dataset. Also, I set mosaic to false, because this error occurs in the mosaic file, which collects 4 images for training.

zbyuan commented 4 years ago

If there are empty labels, uncomment https://github.com/tanluren/yolov3-channel-and-layer-pruning/blob/master/utils/datasets.py#L591
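For orientation, the idea behind that suggestion is to tolerate images whose label files are empty when the mosaic labels are merged. A rough sketch of that kind of guard, with hypothetical names rather than the repo's exact code at datasets.py#L591:

```python
import numpy as np

def merge_mosaic_labels(label_list):
    # Illustrative only: keep the non-empty (n, 5) label arrays [class, x, y, w, h]
    label_list = [l for l in label_list if len(l)]
    if len(label_list):
        return np.concatenate(label_list, 0)
    # all sampled images were background-only: return an empty (0, 5) array
    return np.zeros((0, 5), dtype=np.float32)
```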

qaazii commented 4 years ago

Thank you so much for helping.

I have another problem too! When I finish training, the mAP in the evaluation step is zero because I don't have a .json file for the VOC evaluation set. How can I evaluate my validation set? Do you have any idea?

zbyuan commented 4 years ago

https://github.com/ultralytics/yolov3/wiki/Train-Custom-Data https://blog.csdn.net/qq_34795071/article/details/90769094

qaazii commented 4 years ago

> https://github.com/ultralytics/yolov3/wiki/Train-Custom-Data https://blog.csdn.net/qq_34795071/article/details/90769094

Thank you so much. I am using both of these tutorials, but I still get errors in evaluation, maybe because the evaluation calls the COCO instances2014.json.
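For reference, the COCO JSON path is only used when `save_json` is true; the gate in train.py (visible in the traceback posted further down in this issue) keys off the .data filename. Below is a hedged sketch, written as it would appear inside train.py's training loop, of keeping plain mAP evaluation for a VOC-style dataset; the keyword names follow the ultralytics-style test.py and may differ slightly in this repo:

```python
# Gate seen in train.py: COCO-style JSON output is only requested when the
# .data filename matches ('fruit.data' here; upstream ultralytics uses 'coco.data').
save_json = final_epoch and epoch > 0 and 'fruit.data' in data

# For a VOC-style dataset there is no instances_val2014.json, so leave the JSON
# path off; test.py still reports P, R, mAP and F1 from the validation list.
results, maps = test.test(cfg, data,
                          batch_size=batch_size,
                          img_size=img_size,
                          model=model,
                          save_json=False)   # skip the pycocotools evaluation
```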

qaazii commented 4 years ago

> https://github.com/ultralytics/yolov3/wiki/Train-Custom-Data https://blog.csdn.net/qq_34795071/article/details/90769094

I have the following error in the evaluation of the validation set. Note that I am using the VOC dataset. Starting training for 10 epochs...

 Epoch   gpu_mem      GIoU       obj       cls     total      soft    rratio   targets  img_size

  0%|          | 0/1355 [00:00<?, ?it/s]
learning rate: 1e-06
     0/9     7.65G      2.31      1.78      44.8      48.9         0         0         3       416: 100%|██████████| 1355/1355 [15:19<00:00, 1.47it/s]
Reading image shapes: 100%|██████████| 5417/5417 [00:20<00:00, 265.06it/s]
               Class    Images   Targets         P         R       mAP        F1:   0%|          | 0/339 [00:11<?, ?it/s]

Traceback (most recent call last):
  File "train.py", line 542, in <module>
    train()  # train normally
  File "train.py", line 418, in train
    save_json=final_epoch and epoch > 0 and 'fruit.data' in data)
  File "C:\Users\power703\Desktop\incdet\yolov3-channel-and-layer-pruning\test.py", line 64, in test
    for batch_i, (imgs, targets, paths, shapes) in enumerate(tqdm(dataloader, desc=s)):
  File "C:\Users\power703\Anaconda3\lib\site-packages\tqdm\std.py", line 1129, in __iter__
    for obj in iterable:
  File "C:\Users\power703\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 193, in __iter__
    return _DataLoaderIter(self)
  File "C:\Users\power703\Anaconda3\lib\site-packages\torch\utils\data\dataloader.py", line 469, in __init__
    w.start()
  File "C:\Users\power703\Anaconda3\lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\Users\power703\Anaconda3\lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\power703\Anaconda3\lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Users\power703\Anaconda3\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Users\power703\Anaconda3\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
BrokenPipeError: [Errno 32] Broken pipe

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\power703\Anaconda3\lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\Users\power703\Anaconda3\lib\multiprocessing\spawn.py", line 114, in _main
    prepare(preparation_data)
  File "C:\Users\power703\Anaconda3\lib\multiprocessing\spawn.py", line 225, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "C:\Users\power703\Anaconda3\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
    run_name="__mp_main__")
  File "C:\Users\power703\Anaconda3\lib\runpy.py", line 263, in run_path
    pkg_name=pkg_name, script_name=fname)
  File "C:\Users\power703\Anaconda3\lib\runpy.py", line 96, in _run_module_code
    mod_name, mod_spec, pkg_name, script_name)
  File "C:\Users\power703\Anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\power703\Desktop\incdet\yolov3-channel-and-layer-pruning\train.py", line 3, in <module>
    import torch.distributed as dist
  File "C:\Users\power703\Anaconda3\lib\site-packages\torch\__init__.py", line 79, in <module>
    from torch._C import *
ImportError: DLL load failed: The paging file is too small for this operation to complete.

tanluren commented 4 years ago

This is probably an environment problem: the python, numpy, and torch versions don't match. I suggest trying different versions.

tanluren commented 4 years ago

Also, I'd like to ask how much memory your graphics card has and what batch size you set.

qaazii commented 4 years ago

> Also, I'd like to ask how much memory your graphics card has and what batch size you set.

My graphics card is a 1080, and I tried batch sizes of 16, 4, 2, and 8.

qaazii commented 4 years ago

> This is probably an environment problem: the python, numpy, and torch versions don't match. I suggest trying different versions.

Which versions of python, numpy, and torch should I use? I tried the requirements.txt file, but it gave some errors when installing.

tanluren commented 4 years ago

Python 3.6.7
absl-py 0.8.0
albumentations 0.3.3
apex 0.1
asn1crypto 1.2.0
astor 0.8.0
backcall 0.1.0
certifi 2019.9.11
cffi 1.13.2
chardet 3.0.4
colorama 0.4.1
cryptography 2.7
cycler 0.10.0
Cython 0.29.14
decorator 4.4.1
future 0.18.1
gast 0.2.2
gevent 1.4.0
google-pasta 0.1.7
greenlet 0.4.15
grpcio 1.24.1
h5py 2.10.0
idna 2.8
imageio 2.6.0
imgaug 0.2.6
ipykernel 5.1.3
ipython 7.9.0
ipython-genutils 0.2.0
jedi 0.15.1
jupyter-client 5.3.3
jupyter-core 4.5.0
Keras 2.3.1
Keras-Applications 1.0.8
Keras-Preprocessing 1.1.0
keras-resnet 0.1.0
keras-retinanet 0.5.1
kiwisolver 1.1.0
labelImg 1.8.3
lxml 4.5.0
Markdown 3.1.1
matplotlib 3.1.1
networkx 2.3
numpy 1.17.3
olefile 0.46
opencv-python-headless 4.1.1.26
opt-einsum 3.1.0
pandas 1.0.1
parso 0.5.1
pickleshare 0.7.5
Pillow 6.2.1
pip 19.2.3
progressbar2 3.47.0
prompt-toolkit 2.0.10
protobuf 3.10.0
pycocotools 2.0
pycparser 2.19
Pygments 2.4.2
pyOpenSSL 19.0.0
pyparsing 2.4.5
PyQt5 5.14.1
PyQt5-sip 12.7.1
PySocks 1.7.1
pytesseract 0.3.2
python-dateutil 2.8.1
python-utils 2.3.0
pytz 2019.3
PyWavelets 1.0.3
pywin32 225
PyYAML 5.1.2
pyzmq 18.1.0
requests 2.22.0
scikit-image 0.15.0
scipy 1.3.2
setuptools 41.2.0
six 1.12.0
tensorboard 2.0.0
tensorboardX 1.9
tensorflow 2.0.0
tensorflow-estimator 2.0.0
termcolor 1.1.0
terminaltables 3.1.0
torch 1.2.0
torchfile 0.1.0
torchvision 0.4.0
tornado 6.0.3
tqdm 4.36.1
traitlets 4.3.3
urllib3 1.25.6
visdom 0.1.8.9
wcwidth 0.1.7
webcolors 1.11.1
websocket-client 0.56.0
Werkzeug 0.16.0
wheel 0.33.6
win-inet-pton 1.1.0
wincertstore 0.2
wrapt 1.11.1

This is mine; you can pick out and install the packages you need.
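If it helps, a quick way to confirm an environment matches the relevant parts of that list (Python 3.6.7, numpy 1.17.3, torch 1.2.0, torchvision 0.4.0) before retraining:

```python
import sys
import numpy as np
import torch
import torchvision

print(sys.version)                 # expect 3.6.x
print(np.__version__)              # expect 1.17.3
print(torch.__version__)           # expect 1.2.0
print(torchvision.__version__)     # expect 0.4.0
print(torch.cuda.is_available())   # sanity-check that the CUDA build loads at all
```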

qaazii commented 4 years ago

Thank you so much. Basically, I want to use knowledge distillation with YOLOv3. Can you point out the code that is used only for the distillation network?

tanluren commented 4 years ago

https://github.com/tanluren/yolov3-channel-and-layer-pruning/blob/master/train.py#L366
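That line is where the distillation loss enters training (it is what the `soft` and `rratio` columns in the training log above report). For orientation only, a generic sketch of this kind of soft-target distillation, with hypothetical names rather than the repo's exact implementation:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, hard_loss, T=3.0, ratio=0.5):
    # Soften both distributions with temperature T and match them with KL divergence,
    # then blend the soft (teacher) loss with the normal hard detection loss.
    soft_loss = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                         F.softmax(teacher_logits / T, dim=-1),
                         reduction='batchmean') * (T * T)
    return (1.0 - ratio) * hard_loss + ratio * soft_loss
```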

sbajh123 commented 2 years ago

What does your dataset path look like?