Closed Hyunseok-Kim0 closed 1 year ago
@chbw818 Could you post the entire updated code if possible? It would be helpful for many people.
Thanks! I updated the code at https://github.com/chbw818/yolov8-prune-using-torch-pruning-
Hello, may I ask: with your pruning code, the official weights yolov8n.pt run normally, but using the best.pt that I trained myself results in an error. The error message is as follows:
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat in method wrapper_CUDA_addmv_)
Why does merely changing the weights cause this error?
I think my code will work if you correctly modify loss.py. Could you please give me more information about the error? It seems you forgot to move a tensor or weight to CUDA.
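Not from the thread itself, but a minimal sketch of how this kind of device mismatch arises and is fixed: a matmul between a CUDA tensor and a CPU tensor raises exactly the `Expected all tensors to be on the same device` error above. The tensor names here are illustrative.

```python
import torch

# Run on the GPU when available, otherwise fall back to CPU so the
# sketch still executes (on CPU only, the error cannot be reproduced).
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

pred = torch.randn(2, 8, 16).to(device)        # predictions already on the GPU
proj = torch.arange(16, dtype=torch.float32)   # buffer left on the CPU by mistake

# With a GPU, `pred.matmul(proj)` would fail here with the device-mismatch
# RuntimeError. The fix: move the buffer to the same device as the tensor
# it multiplies before the matmul.
proj = proj.to(pred.device)
out = pred.matmul(proj)
print(out.shape)  # torch.Size([2, 8])
```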
Thank you for your code. I was using it incorrectly, and the problem has now been resolved.
Hey guys! I think I found the bug. When a concat module and a split module are directly connected, the index mapping system fails to compute correct `idxs`. I'm going to rewrite the concat & split tracing. Many thanks for this issue!
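To make the bug concrete, here is an illustrative sketch (my own, not Torch-Pruning's actual API) of what correct index mapping through a Concat followed by a Split should do. If the offsets are dropped or applied twice, you get out-of-range `idxs` like the (1280,) vs (640,) mismatch reported in this issue.

```python
def concat_index_map(idxs_per_input, channels_per_input):
    """Map per-input channel indices into the concatenated channel space."""
    mapped, offset = [], 0
    for idxs, ch in zip(idxs_per_input, channels_per_input):
        # Each input's indices are shifted by the channels that precede it.
        mapped.extend(i + offset for i in idxs)
        offset += ch
    return mapped

def split_index_map(idxs, split_sizes):
    """Map concatenated-space indices back to local indices per split output."""
    bounds, start = [], 0
    for s in split_sizes:
        bounds.append((start, start + s))
        start += s
    out = [[] for _ in split_sizes]
    for i in idxs:
        for k, (lo, hi) in enumerate(bounds):
            if lo <= i < hi:
                out[k].append(i - lo)  # undo the offset for that output
                break
    return out

# Pruning channels [0, 1] of input A and [0] of input B (each 320 channels):
merged = concat_index_map([[0, 1], [0]], [320, 320])
print(merged)                              # [0, 1, 320]
# Splitting the 640-channel result back into two 320-channel halves:
print(split_index_map(merged, [320, 320])) # [[0, 1], [0]]
```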
Has this issue been fixed yet?
Can someone share their before-and-after results for model size, performance drop, and inference time?
Hey @chbw818, thank you a lot for sharing your updated yolov8 pruning code. Could you please tell me which torch, ultralytics, and Python versions you used? I am trying to prune yolov8 and found your updated code for that. It seems to be working for everyone else, but I am encountering this error:
AttributeError: Can't get attribute 'main' on <module 'builtins' (built-in)>
Hello, when I ran the code https://github.com/VainF/Torch-Pruning/blob/master/examples/yolov8/yolov8_pruning.py I got the following error:
Traceback (most recent call last): File "d:/ultralytics-main/prune_v8.py", line 17, in <module> from ultralytics.engine.model import TASK_MAP ImportError: cannot import name 'TASK_MAP' from 'ultralytics.engine.model' (d:\ultralytics-main\ultralytics\engine\model.py)
It seems to be caused by the version of the yolov8 code, but I trained my model with the newest version of yolov8, so I wonder how to solve it. Update: the problem has been solved, and the same issue can be found in the issues. First, replace line 250 in yolov8_pruning.py with this line:
`self.trainer = self.task_map[self.task]['trainer'](overrides=overrides, _callbacks=self.callbacks)`
Next, fix the loss function in ultralytics/ultralytics/utils/loss.py like this:

```python
def bbox_decode(self, anchor_points, pred_dist):
    """Decode predicted object bounding box coordinates from anchor points and distribution."""
    if self.use_dfl:
        b, a, c = pred_dist.shape  # batch, anchors, channels
        mydevice = torch.device('cuda:0')
        self.proj = self.proj.to(mydevice)
        pred_dist = pred_dist.view(b, a, 4, c // 4).softmax(3).matmul(self.proj.type(pred_dist.dtype))
        # pred_dist = pred_dist.view(b, a, c // 4, 4).transpose(2, 3).softmax(3).matmul(self.proj.type(pred_dist.dtype))
        # pred_dist = (pred_dist.view(b, a, c // 4, 4).softmax(2) * self.proj.type(pred_dist.dtype).view(1, 1, -1, 1)).sum(2)
    return dist2bbox(pred_dist, anchor_points, xywh=False)
```
Does anyone know why the loss function has to be modified? Maybe somewhere in the yolov8_pruning script the `proj` parameter is accidentally moved back to the CPU?
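A possibly more robust variant of the loss.py patch above (a standalone sketch, not the official ultralytics code; the `DFLDecoder` class is hypothetical): instead of hardcoding `'cuda:0'`, move `self.proj` to whatever device `pred_dist` lives on, so the same code works on CPU, any GPU index, or multi-GPU training.

```python
import torch

class DFLDecoder:
    """Hypothetical stand-in for the DFL decode step in ultralytics' loss."""

    def __init__(self, reg_max=16):
        # Projection buffer starts on the CPU, like self.proj in loss.py.
        self.proj = torch.arange(reg_max, dtype=torch.float32)

    def decode(self, pred_dist):
        b, a, c = pred_dist.shape  # batch, anchors, channels
        # Follow the input's device instead of hardcoding 'cuda:0'.
        self.proj = self.proj.to(pred_dist.device)
        return pred_dist.view(b, a, 4, c // 4).softmax(3).matmul(
            self.proj.type(pred_dist.dtype)
        )

decoder = DFLDecoder(reg_max=16)
pred = torch.randn(2, 10, 64)  # 4 * reg_max = 64 distribution channels
print(decoder.decode(pred).shape)  # torch.Size([2, 10, 4])
```

Using `pred_dist.device` avoids the original device mismatch without pinning the code to a single GPU.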
Hello, I am trying to apply filter pruning to the yolov8 model. I saw there is sample code for yolov7 at https://github.com/VainF/Torch-Pruning/blob/master/benchmarks/prunability/yolov7_train_pruned.py. Since yolov8 has a very similar structure to yolov7, I thought it would be possible to prune it with minimal modification. However, pruning failed due to a weird problem near the Concat layer. I used the code below under the yolov8 root to prune the model.
The following message is the stack trace from the failed pruning.
The layer in the error message is a batchnorm layer whose `layer.weight.data` is a (640,)-shaped tensor. However, `idxs` has shape (1280,) and contains out-of-range values. Other layers around the concat show similar errors, meaning `idxs` has a much larger shape, or larger values, than the layer's weight length. I tried to figure out why this happens, but I am stuck right now. I guess there is a problem in graph construction, such as `_ConcatIndexMapping` or something similar, for yolov8. It would be great if you could help or give some advice on solving this problem.
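For anyone debugging a similar failure, here is a small sanity-check helper (my own sketch, not part of Torch-Pruning) that flags exactly the symptom described above: an `idxs` list with more entries, or larger values, than the layer has channels.

```python
def check_idxs(layer_name, num_channels, idxs):
    """Report ways in which `idxs` cannot index a layer with `num_channels` channels."""
    problems = []
    if len(idxs) > num_channels:
        problems.append(f"{len(idxs)} idxs for only {num_channels} channels")
    out_of_range = [i for i in idxs if i >= num_channels]
    if out_of_range:
        problems.append(f"out-of-range indices, e.g. {out_of_range[:3]}")
    return [f"{layer_name}: {p}" for p in problems]

# The reported failure: a 640-channel BatchNorm receiving 1280 idxs.
# (The layer name here is made up for illustration.)
print(check_idxs("model.10.bn", 640, list(range(1280))))
# A healthy case produces no reports:
print(check_idxs("model.10.bn", 640, list(range(640))))  # []
```

Running this over each (layer, idxs) pair before applying a pruning plan makes it easy to pinpoint which layer around the concat gets the bad index mapping.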