Minspeech opened this issue 11 months ago
unwrapped_paramters [UnwrappedParameters(parameters=Parameter containing: tensor([0.5190, 0.0565, 0.3977, ..., 0.2764, 0.2888, 0.1559], requires_grad=True), pruning_dim=0)] Is the pruning dimension 0?
Hi, I got a similar error when pruning my YOLOv6 model:
module2node[param].pruning_dim = dim
KeyError: Parameter containing: tensor([0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15., 16.])
How can I solve it?
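The KeyError above is raised while Torch-Pruning traces the computational graph and hits an nn.Parameter that is not attached to any module it recognizes. A minimal sketch (plain PyTorch only; `model` is assumed to be the already-built YOLOv6 model, which is not shown in this thread) for matching the tensor printed in the KeyError to a concrete parameter name:

import torch

# The tensor printed in the KeyError above: 17 consecutive values 0..16.
target = torch.arange(17, dtype=torch.float32)

# Walk over all named parameters and report the ones whose values match,
# so the offending tensor can be traced back to a concrete model attribute.
for name, param in model.named_parameters():
    p = param.detach().cpu()
    if p.shape == target.shape and torch.equal(p, target):
        print("offending parameter:", name)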
@666888999123 @QiqLiang Did you solve this bug? How did you solve it?
Hello, thank you very much for taking time out of your busy schedule to read this. I ran into a problem while pruning: there is one parameter for which the dependency graph cannot be built. The error message is as follows:
unwrapped_paramters [UnwrappedParameters(parameters=Parameter containing: tensor([0.5190, 0.0565, 0.3977, ..., 0.2764, 0.2888, 0.1559], requires_grad=True), pruning_dim=0)]
Traceback (most recent call last):
File "prune_hf_wav2vec.py", line 49, in <module>
pruner = tp.pruner.MetaPruner(
File "/root/anaconda3/envs/hugface2/lib/python3.8/site-packages/torch_pruning/pruner/algorithms/metapruner.py", line 128, in __init__
self.DG = dependency.DependencyGraph().build_dependency(
File "/root/anaconda3/envs/hugface2/lib/python3.8/site-packages/torch_pruning/dependency.py", line 358, in build_dependency
self.module2node = self._trace(
File "/root/anaconda3/envs/hugface2/lib/python3.8/site-packages/torch_pruning/dependency.py", line 742, in _trace
self._trace_computational_graph(
File "/root/anaconda3/envs/hugface2/lib/python3.8/site-packages/torch_pruning/dependency.py", line 851, in _trace_computational_graph
module2node[param].pruning_dim = dim
KeyError: Parameter containing:
tensor([0.5190, 0.0565, 0.3977, ..., 0.2764, 0.2888, 0.1559],
requires_grad=True)
I tried adding this parameter to the ignored_layers list, but the same error as above is still raised. What is going on?
ignored_layers = [the parameter name (i.e. the name of the parameter shown above as Parameter containing: tensor([0.5190, 0.0565, 0.3977, ..., 0.2764, 0.2888, 0.1559], requires_grad=True))]
# Create a pruner object
pruner = tp.pruner.MetaPruner(
    model,
    example_inputs=example_inputs,
    global_pruning=False,
    importance=imp,
    iterative_steps=1,
    pruning_ratio=0.5,
    num_heads=num_heads,
    prune_head_dims=False,
    prune_num_heads=True,
    head_pruning_ratio=0,  # do not prune the multi-head attention
    output_transform=lambda out: out.logits.sum(),
    ignored_layers=ignored_layers,
)
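For reference, a minimal sketch of the approach that usually resolves this, assuming a recent torch-pruning release (the exact keyword format can differ between versions): ignored_layers is meant to receive nn.Module objects, not bare nn.Parameter tensors, which is likely why adding the parameter there had no effect. A free-standing parameter is instead passed through the unwrapped_parameters argument together with the dimension along which it may be pruned. The names "some.param.name" and model.lm_head below are placeholders, not taken from the original script; model, example_inputs, and imp are assumed to be defined as in the snippet above.

import torch_pruning as tp

# Replace "some.param.name" with the name found for the tensor in the KeyError
# (e.g. via the named_parameters() loop sketched earlier in this thread).
offending_param = dict(model.named_parameters())["some.param.name"]

pruner = tp.pruner.MetaPruner(
    model,
    example_inputs=example_inputs,
    global_pruning=False,
    importance=imp,
    iterative_steps=1,
    pruning_ratio=0.5,
    # Hand the free-standing parameter to the dependency graph explicitly,
    # together with its pruning dimension.  Depending on the torch-pruning
    # version this is a list of (nn.Parameter, dim) tuples.
    unwrapped_parameters=[(offending_param, 0)],
    # ignored_layers expects nn.Module instances (e.g. a classification head),
    # not raw nn.Parameter tensors.
    ignored_layers=[model.lm_head] if hasattr(model, "lm_head") else [],
    output_transform=lambda out: out.logits.sum(),
)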