microsoft / nni

An open source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression and hyper-parameter tuning.
https://nni.readthedocs.io
MIT License

model speed error #4159

Closed jxncyym closed 3 years ago

jxncyym commented 3 years ago

Describe the issue: When I compress the model from https://github.com/chenjun2hao/DDRNet.pytorch, I meet an error.

Using L1FilterPruner, the error occurs in these lines:

```python
pruner._unwrap_model()
m_speedup = ModelSpeedup(model, dummy_input, mask_path, device)
m_speedup.speedup_model()
```

Error info: an exception with no description.
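For context, L1FilterPruner selects which convolution filters to mask by ranking them by the L1 norm of their weights and zeroing the smallest ones. A minimal pure-Python sketch of that ranking (made-up weights, independent of NNI and PyTorch):

```python
# Sketch of the L1-norm ranking criterion behind NNI's L1FilterPruner.
# Filters are hypothetical flattened weight lists, not real model weights.

def l1_filter_ranking(filters, sparsity):
    """Return a 0/1 mask per filter: 0 = pruned (smallest L1 norms)."""
    norms = [sum(abs(w) for w in f) for f in filters]   # L1 norm per filter
    n_prune = int(len(filters) * sparsity)              # how many filters to drop
    prune_idx = set(sorted(range(len(norms)), key=norms.__getitem__)[:n_prune])
    return [0 if i in prune_idx else 1 for i in range(len(filters))]

# Four toy "filters"; with sparsity=0.5 the two weakest are masked.
filters = [[0.1, -0.2], [1.0, 2.0], [0.05, 0.0], [0.5, -0.5]]
print(l1_filter_ranking(filters, 0.5))  # [0, 1, 0, 1]
```

The masks NNI writes to `mask_path` follow the same idea per layer; ModelSpeedup then tries to propagate those masks through the graph, which is the step that fails here.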

```
[2021-09-06 19:24:34] INFO (nni.compression.pytorch.speedup.compressor/MainThread) start to speed up the model
/algdata02/yiming.yu/DDRNet.pytorch_pruner/envp_3/lib/python3.6/site-packages/torch/nn/functional.py:3063: UserWarning: Default upsampling behavior when mode=bilinear is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  "See the documentation of nn.Upsample for details.".format(mode))
/algdata02/yiming.yu/DDRNet.pytorch_pruner/envp_3/lib/python3.6/site-packages/torch/jit/_trace.py:940: TracerWarning: Encountering a list at the output of the tracer might cause the trace to be incorrect, this is only valid if the container structure does not change based on the module's inputs. Consider using a constant container instead (e.g. for list, use a tuple instead. for dict, use a NamedTuple instead). If you absolutely need this and know the side effects, pass strict=False to trace() to allow this behavior.
  _force_outplace,
[2021-09-06 19:24:36] INFO (FixMaskConflict/MainThread) {'conv1.0': 1, 'conv1.3': 1, 'layer1.0.conv1': 1, 'layer1.0.conv2': 1, 'layer1.1.conv1': 1, 'layer1.1.conv2': 1, 'layer2.0.conv1': 1, 'layer2.0.conv2': 1, 'layer2.0.downsample.0': 1, 'layer2.1.conv1': 1, 'layer2.1.conv2': 1, 'layer3.0.conv1': 1, 'layer3.0.conv2': 1, 'layer3.0.downsample.0': 1, 'layer3.1.conv1': 1, 'layer3.1.conv2': 1, 'layer3_.0.conv1': 1, 'layer3_.0.conv2': 1, 'layer3_.1.conv1': 1, 'layer3_.1.conv2': 1, 'down3.0': 1, 'compression3.0': 1, 'layer4.0.conv1': 1, 'layer4.0.conv2': 1, 'layer4.0.downsample.0': 1, 'layer4.1.conv1': 1, 'layer4.1.conv2': 1, 'layer4_.0.conv1': 1, 'layer4_.0.conv2': 1, 'layer4_.1.conv1': 1, 'layer4_.1.conv2': 1, 'down4.0': 1, 'down4.3': 1, 'compression4.0': 1, 'layer5_.0.conv1': 1, 'layer5_.0.conv2': 1, 'layer5_.0.conv3': 1, 'layer5_.0.downsample.0': 1, 'layer5.0.conv1': 1, 'layer5.0.conv2': 1, 'layer5.0.conv3': 1, 'layer5.0.downsample.0': 1, 'spp.scale0.2': 1, 'spp.scale1.3': 1, 'spp.process1.2': 1, 'spp.scale2.3': 1, 'spp.process2.2': 1, 'spp.scale3.3': 1, 'spp.process3.2': 1, 'spp.scale4.3': 1, 'spp.process4.2': 1, 'spp.compression.2': 1, 'spp.shortcut.2': 1, 'final_layer.conv1': 1, 'final_layer.conv2': 1, 'seghead_extra.conv1': 1, 'seghead_extra.conv2': 1}
/algdata02/yiming.yu/DDRNet.pytorch_pruner/envp_3/lib/python3.6/site-packages/nni/compression/pytorch/utils/mask_conflict.py:120: UserWarning: This overload of nonzero is deprecated: nonzero() Consider using one of the following signatures instead: nonzero(*, bool as_tuple) (Triggered internally at /pytorch/torch/csrc/utils/python_arg_parser.cpp:882.)
  all_ones = (w_mask.flatten(1).sum(-1) == count).nonzero().squeeze(1).tolist()
[2021-09-06 19:24:36] INFO (FixMaskConflict/MainThread) dim0 sparsity: 0.499858
[2021-09-06 19:24:36] INFO (FixMaskConflict/MainThread) dim1 sparsity: 0.000000
[2021-09-06 19:24:36] INFO (FixMaskConflict/MainThread) Dectected conv prune dim" 0
```
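For readers hitting the same log: the `dim0 sparsity: 0.499858` line reports the fraction of output filters whose mask row is entirely zero. A small pure-Python sketch of that computation (hypothetical masks, no PyTorch):

```python
# Sketch: "dim0 sparsity" as the fraction of output-channel (dim 0)
# mask rows that are all zeros. Masks here are toy nested lists.

def dim0_sparsity(mask):
    """mask: one row per output filter; returns fraction of all-zero rows."""
    pruned = sum(1 for row in mask if all(v == 0 for v in row))
    return pruned / len(mask)

# Two of four filters fully masked -> 0.5, close to the ~0.4999
# reported above for a 50% L1FilterPruner configuration.
mask = [[0, 0, 0], [1, 1, 1], [0, 0, 0], [1, 1, 1]]
print(dim0_sparsity(mask))  # 0.5
```

A `dim1 sparsity` of 0.0 likewise means no input channels were masked, which is why FixMaskConflict detects prune dim 0 (output filters).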

Environment:

Configuration:

Log message:

How to reproduce it?:

YukSing12 commented 2 years ago

I met the same problem when I pruned DDRNet. Could you share your solution if you have solved it?

HappyPeanuts commented 2 years ago

Hi, I meet the same problem. Have you resolved it, and if so, how?