Closed lyb36524 closed 1 year ago
Should I comment out these too?

```python
def forward_split(self, x):
    y = list(self.cv1(x).split((self.c, self.c), 1))
    y.extend(m(y[-1]) for m in self.m)
    return self.cv2(torch.cat(y, 1))
```
I followed your instructions, but my model doesn't detect anything after a successful run. Can you guide me in the right direction?
Try the modification method in this link: https://github.com/DataXujing/ncnn_android_yolov8. Modify both of the places mentioned there. I only modified one of them and it still had an effect, but perhaps both should be changed. Also, check whether the two places I modified correspond to the right lines in your code.
Follow this wiki: https://github.com/Digital2Slave/ncnn-android-yolov8-seg/wiki/Conver-yolov8-model-to-ncnn-model
I followed this method, but the phone app keeps crashing; I have tried for several days with the same result. My steps:
1. Modify the detect parts in modules.py: class C2f and class Detect.
2. `yolo task=detect mode=export model=yolov8n.pt format=onnx simplify=True opset=12` (I also tried without simplify, and with opset 13; still no luck.)
3. Convert to yolov8n-sim-opt-fp16.param online at https://convertmodel.com/#outputFormat=ncnn with simplifier/ncnnoptimize/fp16 checked.
4. Put it into assets and change yolo.cpp to `x.extract("output0", out)`. Note: the last line of the param file is: `Concat /model.22/Concat_5 2 1 /model.22/Mul_2_output_0 /model.22/Sigmoid_output_0 output0`. num_class and the class names were not changed.
5. The app crashes every time it runs.
I would also appreciate FeiGe's guidance.
The model's param file is attached: yolov8n.txt
I have the same problem: training works, but running on Android produces lots of detections and then crashes.
Here is my converted file: yolov8n-sim-opt.txt
@jamiehuang268 @chenzx2 Are you two sure you followed the tutorial in the wiki that was given? From your param files it's obvious you didn't follow it.
I only do detection, so I only needed to modify C2f and Detect, then convert to ONNX, and then convert again through the website.
The problem is fixed; it was my own mistake. I was used to YOLOv5's export workflow. For this export, just write your own Python file to do the export.
For step 2, exporting to ONNX: just create a new export.py yourself and export with it.
@chenzx2 @FeiGeChuanShu Thanks to you both for the guidance; it runs without problems now. The cause was that I had exported to ONNX with the ultralytics built-in export; after switching to the export code above, there were no problems at all.
Hi, when I export my .pt file to ONNX using the export method from the wiki, the result never has a Slice structure. Which step am I missing?
@FeiGeChuanShu Hi, I only changed the Detect and C2f source code in module.py as shown in the wiki. After exporting the trained .pt file to .onnx with the wiki's export.py and viewing it in Netron, I found there is no Slice structure. Which step did I get wrong?
Hi, I went through your steps exactly (which fixed the Android error), but the model still fails to detect anything in testing. Do I need to retrain the YOLOv8 .pt model after changing module.py? Also, after changing module.py, the exported ONNX model can no longer be validated with the suggested method; before the change, the ONNX model validated correctly. Have you run into this?
best-sim-opt-fp16.txt is the param file I converted; is this format correct? Also, another of my conversions ends with an extra Permute (transpose) layer.
After changing module.py, you need to retrain the YOLOv8 model. I haven't run into your other problem yet; if following the wiki directly gives you a parameter-count mismatch, you can refer to my fix in https://github.com/FeiGeChuanShu/ncnn-android-yolov8/issues/11
I fixed the typo of "convert". 😃 https://github.com/Digital2Slave/ncnn-android-yolov8-seg/wiki/Convert-yolov8-model-to-ncnn-model
I want to add a sound alarm to the app (when the confidence is bigger than 0.5), but I don't know how and where to add the code: `music = MediaPlayer.create(MainActivity.this, R.raw.abc); music.start();` I can put the MediaPlayer in MainActivity.java and it runs fine, but I don't know how to read out `obj.prob`, which is defined in yolo.cpp.
A sample is below. Would you please help me figure this out? Thank you. https://blog.csdn.net/Mr_LanGX/article/details/128027716
Is there a way to read out `obj.prob` from yolo.cpp in MainActivity.java?
@jamiehuang268 Please check the `static void draw_objects(const cv::Mat& bgr, const std::vector<Object>& objects)` function. You need to write a JNI function to expose `obj.prob`; refer to https://blog.csdn.net/Vaccae/article/details/111595943. Good luck!
Hi, after changing C2f and Detect, do I need to retrain the model?
Hi, after making those changes, does the model run? Is there any difference in the results?
Why can't the ONNX model I converted from YOLOv8 even run predict?
May I ask: do you train with the original code and only change the code when exporting? The modified code can't be used for training, right? So I need to change it back to the original when training again, is that correct?
Hi, in the latest version, C2f has moved to site-packages/ultralytics/nn/modules/block.py, and Detect has moved to site-packages/ultralytics/nn/modules/head.py.
In the latest version of YOLOv8, following the previous modification method raises an error. Is there a modification method for the latest version?
I followed your instructions, but my model doesn't detect anything after a successful run. Can you guide me in the right direction?
Sometimes it's because the model wasn't trained well enough. Try moving the phone closer to or farther from the object being detected, and keep adjusting the angle. With the modification and export method described in this issue, I can currently export and detect successfully.
The latest version of YOLOv8 can also be converted this way. You don't need to retrain the model for the conversion, but when using yolo to train or run inference, change the code back to the original, otherwise it will error. For the new-version conversion method, see the issue post at the top; I have updated it with the latest instructions. I also recorded a short video explaining the conversion process: https://www.bilibili.com/video/BV1Uo4y1F7vG/
Hey!
I have followed the steps but it does not work for me.
Here are the steps I took to export the model (https://colab.research.google.com/drive/1rZf3f7koBCyGOfZKiOutPtl0El6w6TfR?usp=sharing ):
Steps:
1. `pip3 install ultralytics==8.0.50 onnx==1.12.0 onnxruntime==1.12.0 onnx-simplifier==0.4.8 onnxsim==0.4.13`
2. Replace the code:
```python
class C2f(nn.Module):
    # CSP Bottleneck with 2 convolutions
    def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5):  # ch_in, ch_out, number, shortcut, groups, expansion
        super().__init__()
        self.c = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, 2 * self.c, 1, 1)
        self.cv2 = Conv((2 + n) * self.c, c2, 1)  # optional act=FReLU(c2)
        self.m = nn.ModuleList(Bottleneck(self.c, self.c, shortcut, g, k=((3, 3), (3, 3)), e=1.0) for _ in range(n))

    def forward(self, x):
        # y = list(self.cv1(x).chunk(2, 1))
        # y.extend(m(y[-1]) for m in self.m)
        # return self.cv2(torch.cat(y, 1))
        print('forward C2f')
        x = self.cv1(x)
        x = [x, x[:, self.c:, ...]]
        x.extend(m(x[-1]) for m in self.m)
        x.pop(1)
        return self.cv2(torch.cat(x, 1))

    def forward_split(self, x):
        y = list(self.cv1(x).split((self.c, self.c), 1))
        y.extend(m(y[-1]) for m in self.m)
        return self.cv2(torch.cat(y, 1))
```
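A side note on why this rewrite is safe: the modified `forward` avoids `chunk`/`split` (ops the ONNX-to-ncnn path handles poorly) yet concatenates exactly the same tensors as the original. A minimal NumPy sketch of my own (purely illustrative, with the bottleneck modules replaced by an identity and n = 1) checks the equivalence:

```python
import numpy as np

def original_forward_order(x, c):
    # original C2f.forward: y = list(x.chunk(2, 1)); y.extend(m(y[-1])); cat
    y = [x[:, :c], x[:, c:]]
    y.append(y[-1])  # stand-in for m(y[-1]) with m = identity, n = 1
    return np.concatenate(y, axis=1)

def modified_forward_order(x, c):
    # modified C2f.forward: x = [x, x[:, c:]]; x.extend(m(x[-1])); x.pop(1); cat
    xs = [x, x[:, c:]]
    xs.append(xs[-1])  # stand-in for m(xs[-1]) with m = identity, n = 1
    xs.pop(1)          # drop the duplicated slice that seeded the loop
    return np.concatenate(xs, axis=1)

x = np.arange(48, dtype=np.float32).reshape(1, 4, 3, 4)  # (N, C, H, W)
print(np.array_equal(original_forward_order(x, 2), modified_forward_order(x, 2)))  # True
```

The same ordering argument holds with real `Bottleneck` modules, since `x` after `cv1` is itself the concatenation of the two halves along the channel axis.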
```python
class Detect(nn.Module):
    dynamic = False  # force grid reconstruction
    export = False  # export mode
    shape = None
    anchors = torch.empty(0)  # init
    strides = torch.empty(0)  # init

    def __init__(self, nc=80, ch=()):  # detection layer
        super().__init__()
        self.nc = nc  # number of classes
        self.nl = len(ch)  # number of detection layers
        self.reg_max = 16  # DFL channels (ch[0] // 16 to scale 4/8/12/16/20 for n/s/m/l/x)
        self.no = nc + self.reg_max * 4  # number of outputs per anchor
        self.stride = torch.zeros(self.nl)  # strides computed during build
        c2, c3 = max((16, ch[0] // 4, self.reg_max * 4)), max(ch[0], self.nc)  # channels
        self.cv2 = nn.ModuleList(
            nn.Sequential(Conv(x, c2, 3), Conv(c2, c2, 3), nn.Conv2d(c2, 4 * self.reg_max, 1)) for x in ch)
        self.cv3 = nn.ModuleList(nn.Sequential(Conv(x, c3, 3), Conv(c3, c3, 3), nn.Conv2d(c3, self.nc, 1)) for x in ch)
        self.dfl = DFL(self.reg_max) if self.reg_max > 1 else nn.Identity()

    def forward(self, x):
        shape = x[0].shape  # BCHW
        for i in range(self.nl):
            x[i] = torch.cat((self.cv2[i](x[i]), self.cv3[i](x[i])), 1)
        if self.training:
            return x
        elif self.dynamic or self.shape != shape:
            self.anchors, self.strides = (x.transpose(0, 1) for x in make_anchors(x, self.stride, 0.5))
            self.shape = shape
        # if self.export and self.format == 'edgetpu':  # FlexSplitV ops issue
        #     x_cat = torch.cat([xi.view(shape[0], self.no, -1) for xi in x], 2)
        #     box = x_cat[:, :self.reg_max * 4]
        #     cls = x_cat[:, self.reg_max * 4:]
        # else:
        #     box, cls = torch.cat([xi.view(shape[0], self.no, -1) for xi in x], 2).split((self.reg_max * 4, self.nc), 1)
        # dbox = dist2bbox(self.dfl(box), self.anchors.unsqueeze(0), xywh=True, dim=1) * self.strides
        # y = torch.cat((dbox, cls.sigmoid()), 1)
        # return y if self.export else (y, x)
        print('forward Detect')
        pred = torch.cat([xi.view(shape[0], self.no, -1) for xi in x], 2).permute(0, 2, 1)
        return pred

    def bias_init(self):
        # Initialize Detect() biases, WARNING: requires stride availability
        m = self  # self.model[-1]  # Detect() module
        # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1
        # ncf = math.log(0.6 / (m.nc - 0.999999)) if cf is None else torch.log(cf / cf.sum())  # nominal class frequency
        for a, b, s in zip(m.cv2, m.cv3, m.stride):  # from
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[:m.nc] = math.log(5 / m.nc / (640 / s) ** 2)  # cls (.01 objects, 80 classes, 640 img)
```
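The modified `Detect.forward` just flattens the three heads and concatenates them along the anchor axis. A small NumPy sketch (shapes assumed for a 640x640 input with nc = 1 and reg_max = 16, matching the (1, 8400, 65) shape in the export log; illustrative only, not the real network) shows where that shape comes from:

```python
import numpy as np

no = 1 + 16 * 4                                  # nc + 4 * reg_max channels per anchor
# three head feature maps at strides 8/16/32 of a 640x640 input
heads = [np.zeros((1, no, s, s), dtype=np.float32) for s in (80, 40, 20)]
pred = np.concatenate([h.reshape(1, no, -1) for h in heads], axis=2)  # (1, no, 8400)
pred = pred.transpose(0, 2, 1)                   # permute(0, 2, 1) -> (1, anchors, no)
print(pred.shape)  # (1, 8400, 65)
```

The 8400 comes from 80*80 + 40*40 + 20*20 anchor positions.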
Export code:

```python
from ultralytics import YOLO

model = YOLO("old-best.pt")
success = model.export(format="onnx", opset=12, simplify=True)
```
Here is my output from the export code:

```
Ultralytics YOLOv8.0.50 🚀 Python-3.10.12 torch-2.0.1+cu118 CPU
Model summary (fused): 168 layers, 3005843 parameters, 0 gradients, 8.1 GFLOPs
forward C2f (x8)
forward Detect
PyTorch: starting from old-best.pt with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 8400, 65) (6.0 MB)
ONNX: starting export with onnx 1.12.0... forward Detect forward Detect forward Detect
ONNX: simplifying with onnxsim 0.4.8...
============= Diagnostic Run torch.onnx.export version 2.0.1+cu118 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
ONNX: export success ✅ 1.3s, saved as old-best.onnx (11.5 MB)
Export complete (2.0s)
```
3. Use https://convertmodel.com/ with the "simplify", "optimize", "fp16" options enabled. Here is my params file: [old-best-sim-opt-fp16.param.txt](https://github.com/FeiGeChuanShu/ncnn-android-yolov8/files/11887677/old-best-sim-opt-fp16.param.txt)
4. I've used code https://github.com/Qengineering/YoloV8-ncnn-Raspberry-Pi-4/blob/main/yoloV8.cpp to run detection
I'm receiving `1655 objects found` on a picture where I have only one relevant object. What could be the issue?
I've exported an already-trained model. Do I need to re-train it with the code changes, or is export-only fine?
btw, Ultralytics is open to adding these code changes directly to the repo https://github.com/ultralytics/ultralytics/issues/3412 Unfortunately, I'm not that good with ncnn to make a PR.
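One likely cause of thousands of "detections" (a guess, not a confirmed diagnosis): the modified `Detect` exports raw logits, so the runtime side must itself apply a sigmoid to the class scores, a confidence threshold, the DFL box decode, and NMS. A rough NumPy sketch of decoding a single anchor row, under my own assumptions (reg_max = 16, threshold 0.5; the function names are illustrative, not taken from yolo.cpp):

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def decode_anchor_row(row, nc, reg_max=16, conf_thres=0.5):
    """Decode one row of the raw (1, anchors, 4*reg_max + nc) output.

    Returns (ltrb_distances, class_id, score) or None if below threshold.
    Illustrative only: distances are in grid units and would still need
    scaling by the anchor's stride, conversion to xyxy, and NMS.
    """
    box, cls = row[:4 * reg_max], row[4 * reg_max:]
    scores = 1.0 / (1.0 + np.exp(-cls))  # raw logits -> sigmoid scores
    cid = int(scores.argmax())
    if scores[cid] < conf_thres:
        return None
    # DFL: each of the 4 distances is the expectation over reg_max softmax bins
    dist = np.array([(softmax(box[i * reg_max:(i + 1) * reg_max]) * np.arange(reg_max)).sum()
                     for i in range(4)])
    return dist, cid, float(scores[cid])

row = np.zeros(4 * 16 + 1, dtype=np.float32)
row[64] = 4.0                        # high class logit -> kept
print(decode_anchor_row(row, nc=1) is not None)  # True

low = np.zeros(4 * 16 + 1, dtype=np.float32)
low[64] = -4.0                       # low class logit -> rejected
print(decode_anchor_row(low, nc=1))  # None
```

If the runtime skips the sigmoid or the threshold, nearly all 8400 anchors can pass, which looks exactly like the `1655 objects found` symptom.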
Hello, I converted the model following the method above, and also created a separate export.py to convert the .pt to ONNX, but the app still crashes. Is there any other approach?
Hello, I ran into the same error: my custom ncnn model detects far too many objects. How do I convert a YOLOv8 model to an ncnn model correctly?
Hi, did you manage to solve the app crash problem?
Have you got the answer? Same problem here.... :(
Are there detailed steps for converting a YOLOv8 model to ncnn? I converted my own YOLOv8 model, but the recognition results are wrong. I used the following commands for the conversion:
`yolo task=detect mode=export model=best.pt format=onnx simplify=True opset=13`
`onnx2ncnn best.onnx DJIROCOs1.param DJIROCOs1.bin`
`ncnnoptimize.exe DJIROCOs1.param DJIROCOs1.bin DJIROCOs.param DJIROCOs.bin 65536`
The run output: In addition, I also made modifications here:
2023/08/30: For self-trained models, if some of you get boxes drawn randomly, check the following:
Be sure to change the `const int num_class =` variable in yolo.cpp so that it matches (or is smaller than) the number of classes in your own model.
If the app crashes, check the input/output names in yolo.cpp:
as well as the name of the loaded model:
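The num_class advice above can be cross-checked against the exported output shape: the last dimension of the (1, anchors, C) output should satisfy C = 4 * reg_max + num_class (this follows from `self.no = nc + self.reg_max * 4` in Detect). A trivial helper of my own (not part of the project) makes the relationship explicit:

```python
def expected_output_channels(num_class, reg_max=16):
    # Exported head output is (1, anchors, 4 * reg_max + num_class),
    # so num_class in yolo.cpp should equal C - 4 * reg_max (C - 64 by default).
    return 4 * reg_max + num_class

print(expected_output_channels(80))  # 144: stock 80-class COCO model
print(expected_output_channels(1))   # 65: single-class custom model
```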
2023/05/19 update:
After the YOLOv8 update, the file ultralytics/ultralytics/nn/modules.py no longer exists. It was replaced by: ultralytics\nn\modules\block.py and ultralytics\nn\modules\head.py
1. You need to modify `class C2f(nn.Module)` in ultralytics\nn\modules\block.py:
As shown:
2. You need to modify `class Detect(nn.Module)` in ultralytics\nn\modules\head.py:
As shown:
After the modification, the model does not need to be retrained; just run the export below. The remaining steps are unchanged. Change the code back to the original before training or running inference again, otherwise it will error. For the ncnn conversion, you can use the online tool: https://convertmodel.com/
Below is the modification method for the old YOLOv8 version; the new version is above:
This also references: https://github.com/DataXujing/ncnn_android_yolov8
1. Change the yolov8 files (2 places):
Path: C:\ProgramData\Anaconda3\envs\yolov8\Lib\site-packages\ultralytics\nn\modules.py
2. Run the command to convert the model:
3. Put the files into the Android project directory.
4. Modify yolo.cpp in the Android project (3 places).
5. Running result: