FeiGeChuanShu / ncnn-android-yolov8

Real time yolov8 Android demo by ncnn

Are there detailed steps for converting a YOLOv8 model to NCNN? I converted my own yolov8 model, but the detection results are wrong #8

Closed lyb36524 closed 1 year ago

lyb36524 commented 1 year ago

Are there detailed steps for converting a YOLOv8 model to NCNN? I converted my own yolov8 model, but the detection results are wrong. [screenshot]

I used the following commands for the conversion:

yolo task=detect mode=export model=best.pt format=onnx simplify=True opset=13
onnx2ncnn best.onnx DJIROCOs1.param DJIROCOs1.bin
ncnnoptimize.exe DJIROCOs1.param DJIROCOs1.bin DJIROCOs.param DJIROCOs.bin 65536

Run log: [screenshot]

I also made a change here: [screenshot]

2023/08/30: If your self-trained model draws boxes all over the place, check the following:

Be sure to change the const int num_class variable in yolo.cpp so that it matches (or is smaller than) the number of classes in your model. [screenshot]

If the app crashes, check the input/output names in yolo.cpp: [screenshot]

and the name of the loaded model: [screenshot]
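As a sanity check on num_class, you can derive it from the width of the exported head output: in the YOLOv8 head the last dimension is nc + 4 * reg_max with reg_max = 16. A minimal sketch under that assumption (the helper name is mine, not from the repo):

```python
REG_MAX = 16  # DFL bins per box side in the YOLOv8 detection head

def num_classes_from_output_width(no):
    """Derive the class count from the last dimension of the exported
    (1, anchors, no) tensor, where no = nc + 4 * reg_max."""
    nc = no - 4 * REG_MAX
    if nc <= 0:
        raise ValueError(f"output width {no} is too small for reg_max={REG_MAX}")
    return nc

print(num_classes_from_output_width(144))  # 80-class COCO model -> 80
print(num_classes_from_output_width(65))   # single-class custom model -> 1
```

If the number this gives you disagrees with const int num_class in yolo.cpp, the post-processing will read past the class scores and decode garbage.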

2023/05/19 update:

After a YOLOv8 update, the file ultralytics/ultralytics/nn/modules.py no longer exists. It was split into: ultralytics\nn\modules\block.py and ultralytics\nn\modules\head.py

1. In ultralytics\nn\modules\block.py, modify class C2f(nn.Module) as follows:

class C2f(nn.Module):
    """CSP Bottleneck with 2 convolutions."""

    def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5):  # ch_in, ch_out, number, shortcut, groups, expansion
        super().__init__()
        self.c = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, 2 * self.c, 1, 1)
        self.cv2 = Conv((2 + n) * self.c, c2, 1)  # optional act=FReLU(c2)
        self.m = nn.ModuleList(Bottleneck(self.c, self.c, shortcut, g, k=((3, 3), (3, 3)), e=1.0) for _ in range(n))

    def forward(self, x):
        """Forward pass of a YOLOv5 CSPDarknet backbone layer."""
        # y = list(self.cv1(x).chunk(2, 1))
        # y.extend(m(y[-1]) for m in self.m)
        # return self.cv2(torch.cat(y, 1))

        x = self.cv1(x)
        x = [x, x[:, self.c:, ...]]
        x.extend(m(x[-1]) for m in self.m)
        x.pop(1)
        return self.cv2(torch.cat(x, 1))

As shown: [screenshot]
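Why this rewrite is safe: the original chunk-based forward concatenates [first half, second half, bottleneck outputs], while the slice-based version concatenates [full input, bottleneck outputs] after popping the duplicated second half, so the channel order fed to cv2 is identical; only the ops change (a Slice instead of chunk/split), which converts to ncnn more cleanly. A plain-Python analogy, with lists standing in for channel slices (names illustrative, not from the repo):

```python
def bottleneck(part):
    # stand-in for a Bottleneck block: any transform of the last chunk
    return [v + 100 for v in part]

def c2f_chunk(x, c, n=2):
    # original forward: chunk into halves, chain bottlenecks off the tail
    y = [x[:c], x[c:]]
    y.extend(bottleneck(y[-1]) for _ in range(n))
    return sum(y, [])  # concat

def c2f_slice(x, c, n=2):
    # export-friendly forward: keep the full input, slice off the tail
    z = [x, x[c:]]
    z.extend(bottleneck(z[-1]) for _ in range(n))
    z.pop(1)  # drop the duplicated second half before the concat
    return sum(z, [])  # concat

x = list(range(8))
print(c2f_chunk(x, 4) == c2f_slice(x, 4))  # True: same output layout
```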

2. In ultralytics\nn\modules\head.py, modify the forward method of class Detect(nn.Module) as follows:

    def forward(self, x):
        """Concatenates and returns predicted bounding boxes and class probabilities."""
        shape = x[0].shape  # BCHW
        for i in range(self.nl):
            x[i] = torch.cat((self.cv2[i](x[i]), self.cv3[i](x[i])), 1)
        if self.training:
            return x
        elif self.dynamic or self.shape != shape:
            self.anchors, self.strides = (x.transpose(0, 1) for x in make_anchors(x, self.stride, 0.5))
            self.shape = shape

        # x_cat = torch.cat([xi.view(shape[0], self.no, -1) for xi in x], 2)
        # if self.export and self.format in ('saved_model', 'pb', 'tflite', 'edgetpu', 'tfjs'):  # avoid TF FlexSplitV ops
        #     box = x_cat[:, :self.reg_max * 4]
        #     cls = x_cat[:, self.reg_max * 4:]
        # else:
        #     box, cls = x_cat.split((self.reg_max * 4, self.nc), 1)
        # dbox = dist2bbox(self.dfl(box), self.anchors.unsqueeze(0), xywh=True, dim=1) * self.strides
        # y = torch.cat((dbox, cls.sigmoid()), 1)
        # return y if self.export else (y, x)

        pred = torch.cat([xi.view(shape[0], self.no, -1) for xi in x], 2).permute(0, 2, 1)
        return pred

As shown: [screenshot]
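The pred returned here has shape (batch, anchors, no): one row per grid cell at each of the three detection scales. A small sketch of that arithmetic, assuming the standard YOLOv8 strides of 8, 16, 32 (the function is mine, not from the repo):

```python
def anchor_rows(imgsz, strides=(8, 16, 32)):
    """Rows in the (batch, rows, no) prediction tensor:
    one per grid cell at each detection scale."""
    return sum((imgsz // s) ** 2 for s in strides)

print(anchor_rows(640))  # 80*80 + 40*40 + 20*20 -> 8400
print(anchor_rows(320))  # 40*40 + 20*20 + 10*10 -> 2100
```

So an export at imgsz=640 should show an output shape like (1, 8400, no), and imgsz=320 gives (1, 2100, no); a different row count usually means the input size doesn't match what yolo.cpp expects.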

After these changes the model does not need to be retrained; just run the export script below, and the remaining steps are unchanged. Switch the code back to the original before training or running inference again, otherwise it will error. For the ncnn conversion you can use the online tool: https://convertmodel.com/

from ultralytics import YOLO

# Load a model
model = YOLO('best.pt')  # load an official model
# model = YOLO('path/to/best.pt')  # load a custom trained

# Use the model
success = model.export(format="onnx", imgsz=320, half=True, optimize=True)

The modification method for older versions of yolov8 is below; the method for the new version is above:

This also references: https://github.com/DataXujing/ncnn_android_yolov8

1. Modify the yolov8 source files (2 places):

Path: C:\ProgramData\Anaconda3\envs\yolov8\Lib\site-packages\ultralytics\nn\modules.py [screenshot] [screenshot]

2. Run the commands to convert the model:

yolo task=detect mode=export model=best.pt format=onnx simplify=True opset=13
onnx2ncnn best.onnx DJIROCOs1.param DJIROCOs1.bin
ncnnoptimize.exe DJIROCOs1.param DJIROCOs1.bin DJIROCOs.param DJIROCOs.bin 65536

3. Put the converted files into the Android project directory.

[screenshot]

4. Modify yolo.cpp in the Android project (3 places):

[screenshots of the 3 changes]

5. Result:

[screenshot]

kostum123 commented 1 year ago

should I comment out these too?

def forward_split(self, x):
    y = list(self.cv1(x).split((self.c, self.c), 1))
    y.extend(m(y[-1]) for m in self.m)
    return self.cv2(torch.cat(y, 1))

kostum123 commented 1 year ago

I followed your instructions but my model doesn't detect anything after a successful run. Can you guide me in the right direction?

lyb36524 commented 1 year ago

I followed your instructions but my model doesn't detect anything after a successful run. Can you guide me in the right direction?

Try the modification method in this link: https://github.com/DataXujing/ncnn_android_yolov8 — modify both of the places described there. I only modified one of them and it still worked, but perhaps both should be modified. Also check whether the two places you modified correspond to the line numbers I changed.

FeiGeChuanShu commented 1 year ago

I followed your instructions but my model doesn't detect anything after a successful run. Can you guide me in the right direction?

https://github.com/Digital2Slave/ncnn-android-yolov8-seg/wiki/Conver-yolov8-model-to-ncnn-model follow this wiki

jamiehuang268 commented 1 year ago

I followed your instructions but my model doesn't detect anything after a successful run. Can you guide me in the right direction?

Try the modification method in this link: https://github.com/DataXujing/ncnn_android_yolov8 — modify both of the places described there. I only modified one of them and it still worked, but perhaps both should be modified. Also check whether the two places you modified correspond to the line numbers I changed.

I followed this method but the phone app keeps crashing; I've tried for several days. My steps were:
1. Modify the detect part of modules.py: class C2f and class Detect.
2. yolo task=detect mode=export model=yolov8n.pt format=onnx simplify=True opset=12 (I also tried without simplify and with opset 13; it still fails).
3. Convert online to yolov8n-sim-opt-fp16.param at https://convertmodel.com/#outputFormat=ncnn with simplifier/ncnnoptimize/fp16 checked.
4. Put the files into assets and change yolo.cpp to x.extract("output0", out). Note: the last line of the param file is: Concat /model.22/Concat_5 2 1 /model.22/Mul_2_output_0 /model.22/Sigmoid_output_0 output0. Neither num_class nor the class names were changed.
5. The app crashes every time it runs. [screenshot]

jamiehuang268 commented 1 year ago

I followed your instructions but my model doesn't detect anything after a successful run. Can you guide me in the right direction?

https://github.com/Digital2Slave/ncnn-android-yolov8-seg/wiki/Conver-yolov8-model-to-ncnn-model follow this wiki

I would also appreciate FeiGe's guidance.

jamiehuang268 commented 1 year ago

The model's param file is attached: yolov8n.txt

chenzx2 commented 1 year ago

I have the same problem: the model trains fine, but running on Android produces a flood of detections and then crashes. [screenshot]

chenzx2 commented 1 year ago

This is the file I converted: yolov8n-sim-opt.txt

FeiGeChuanShu commented 1 year ago

@jamiehuang268 @chenzx2 Are you two sure you followed the tutorial in the wiki I linked? Judging from your param files, you clearly didn't follow it.

chenzx2 commented 1 year ago

@jamiehuang268 @chenzx2 Are you two sure you followed the tutorial in the wiki I linked? Judging from your param files, you clearly didn't follow it.

I only do detection, so I only needed to modify C2f and Detect. [screenshots] Then convert to onnx: [screenshot] Then convert through the website: [screenshot]

chenzx2 commented 1 year ago

@jamiehuang268 @chenzx2 Are you two sure you followed the tutorial in the wiki I linked? Judging from your param files, you clearly didn't follow it.

The problem is fixed; it was my own mistake. I was used to YOLOv5's export command. For this export, just write your own Python file to run it. [screenshot]

chenzx2 commented 1 year ago

For step 2, exporting the onnx, just create your own export.py and run it. [screenshot]

jamiehuang268 commented 1 year ago

@chenzx2 @FeiGeChuanShu Thanks to you both for the guidance; it runs now without problems. The cause was that I exported the onnx with ultralytics' built-in export; exporting with the code above instead fixed everything.

reveLATONG commented 1 year ago

Hello, when I export my pt file to onnx using the export method on the wiki, the resulting onnx never has the Slice structure. Which step am I missing?

reveLATONG commented 1 year ago

@jamiehuang268 @chenzx2 Are you two sure you followed the tutorial in the wiki I linked? Judging from your param files, you clearly didn't follow it.

@FeiGeChuanShu Hello, I only changed the Detect and C2f source in module.py following the wiki. After exporting the resulting .pt file to .onnx with the wiki's export.py and inspecting it in netron, there is no Slice structure. Which step did I get wrong?

Digital2Slave commented 1 year ago

Hello, when I export my pt file to onnx using the export method on the wiki, the resulting onnx never has the Slice structure. Which step am I missing?

  1. For detect model:

https://github.com/Digital2Slave/ncnn-android-yolov8-seg/wiki/Convert-yolov8-model-to-ncnn-model#312-detect-model

  2. For segment model:

https://github.com/Digital2Slave/ncnn-android-yolov8-seg/wiki/Convert-yolov8-model-to-ncnn-model#311-segment-model

daohaofunk commented 1 year ago

@chenzx2 @FeiGeChuanShu Thanks to you both for the guidance; it runs now without problems. The cause was that I exported the onnx with ultralytics' built-in export; exporting with the code above instead fixed everything.

Hi, I followed your steps exactly (which fixed the Android error), but the model still can't detect properly in testing. Do I need to retrain the yolov8 pt model after changing modules.py? Also, after changing modules.py, the exported onnx model can no longer be validated the suggested way; before the change the onnx validated fine. Did you run into this? [screenshot]

daohaofunk commented 1 year ago

best-sim-opt-fp16.txt is the param file I converted. Is this format correct? Also, one of my conversions has an extra permute (transpose) layer at the end.

reveLATONG commented 1 year ago

@chenzx2 @FeiGeChuanShu Thanks to you both for the guidance; it runs now without problems. The cause was that I exported the onnx with ultralytics' built-in export; exporting with the code above instead fixed everything.

Hi, I followed your steps exactly (which fixed the Android error), but the model still can't detect properly in testing. Do I need to retrain the yolov8 pt model after changing modules.py? Also, after changing modules.py, the exported onnx model can no longer be validated the suggested way; before the change the onnx validated fine. Did you run into this? [screenshot]

After changing modules.py you need to retrain the yolov8 model. I haven't hit that validation problem yet; if following the wiki directly gives you a parameter-count mismatch, you can refer to my fix in https://github.com/FeiGeChuanShu/ncnn-android-yolov8/issues/11

Digital2Slave commented 1 year ago

I followed your instructions but my model doesn't detect anything after a successful run. Can you guide me in the right direction?

https://github.com/Digital2Slave/ncnn-android-yolov8-seg/wiki/Conver-yolov8-model-to-ncnn-model follow this wiki

I fixed typo of convert. 😃

https://github.com/Digital2Slave/ncnn-android-yolov8-seg/wiki/Convert-yolov8-model-to-ncnn-model

jamiehuang268 commented 1 year ago

I followed your instructions but my model doesn't detect anything after a successful run. Can you guide me in the right direction?

https://github.com/Digital2Slave/ncnn-android-yolov8-seg/wiki/Conver-yolov8-model-to-ncnn-model follow this wiki

I fixed typo of convert. 😃

https://github.com/Digital2Slave/ncnn-android-yolov8-seg/wiki/Convert-yolov8-model-to-ncnn-model

I want to add a sound alarm to the app (when the confidence is bigger than 0.5), but I don't know how or where to add the code: music = MediaPlayer.create(MainActivity.this, R.raw.abc); music.start(); I can put the MediaPlayer in MainActivity.java and it runs fine, but I don't know how to read out "obj.prob", which is defined in yolo.cpp.

A sample is below. Could you please help me figure this out? Thank you. https://blog.csdn.net/Mr_LanGX/article/details/128027716

jamiehuang268 commented 1 year ago

I followed your instructions but my model doesn't detect anything after a successful run. Can you guide me in the right direction?

https://github.com/Digital2Slave/ncnn-android-yolov8-seg/wiki/Conver-yolov8-model-to-ncnn-model follow this wiki

I fixed typo of convert. 😃

https://github.com/Digital2Slave/ncnn-android-yolov8-seg/wiki/Convert-yolov8-model-to-ncnn-model

Is it possible to read out "obj.prob" from yolo.cpp in MainActivity.java?

Digital2Slave commented 1 year ago

@jamiehuang268 Please check the static void draw_objects(const cv::Mat& bgr, const std::vector<Object>& objects) function. You need to write a JNI function to expose obj.prob; refer to https://blog.csdn.net/Vaccae/article/details/111595943 . Good luck!

xsy0520 commented 1 year ago

@chenzx2 @FeiGeChuanShu Thanks to you both for the guidance; it runs now without problems. The cause was that I exported the onnx with ultralytics' built-in export; exporting with the code above instead fixed everything.

Hi, after changing C2f and Detect, do I need to retrain the model?

xsy0520 commented 1 year ago

@chenzx2 @FeiGeChuanShu Thanks to you both for the guidance; it runs now without problems. The cause was that I exported the onnx with ultralytics' built-in export; exporting with the code above instead fixed everything.

Hi, I followed your steps exactly (which fixed the Android error), but the model still can't detect properly in testing. Do I need to retrain the yolov8 pt model after changing modules.py? Also, after changing modules.py, the exported onnx model can no longer be validated the suggested way; before the change the onnx validated fine. Did you run into this? [screenshot]

After changing modules.py you need to retrain the yolov8 model. I haven't hit that validation problem yet; if following the wiki directly gives you a parameter-count mismatch, you can refer to my fix in https://github.com/FeiGeChuanShu/ncnn-android-yolov8/issues/11

Hi, after making these changes, does the model run for you? Is there any difference in accuracy?

xsy0520 commented 1 year ago

Why can't the onnx model I exported from yolov8 even run predict?

lilanfei commented 1 year ago

@chenzx2 @FeiGeChuanShu Thanks to you both for the guidance; it runs now without problems. The cause was that I exported the onnx with ultralytics' built-in export; exporting with the code above instead fixed everything.

A question: do you train with the original code and only apply the changes when exporting? The modified code can't be used for training, right? So you need to switch back to the original code when training again?

wooAo commented 1 year ago

Hi, for the latest version, C2f was moved to site-packages/ultralytics/nn/modules/block.py,

and Detect was moved to site-packages/ultralytics/nn/modules/head.py

lyb36524 commented 1 year ago

Hi, for the latest version, C2f was moved to site-packages/ultralytics/nn/modules/block.py,

and Detect was moved to site-packages/ultralytics/nn/modules/head.py

In the latest version of YOLOv8, following the previous modification method gives me an error. So is there a modification method for the latest version? [screenshot]

lyb36524 commented 1 year ago

I followed your instructions but my model doesn't detect anything after a successful run. Can you guide me in the right direction?

Sometimes it's because the model wasn't trained to a high enough quality. Try moving the phone closer to or farther from the object and keep adjusting the angle. With the modification and export method I posted in this issue, I can currently export and run detection successfully.

lyb36524 commented 1 year ago

The latest version of yolov8 can also be converted this way. You do not need to retrain the model for conversion, but when training or running inference with yolo you must revert the code to the original, otherwise it errors. For the new-version conversion method, see the top post in this issue; I have updated it with the latest steps. I also recorded a short video explaining the conversion process: https://www.bilibili.com/video/BV1Uo4y1F7vG/

yrik commented 1 year ago

Hey!

I have followed the steps but it does not work for me.

Here are the steps I made to export the model (https://colab.research.google.com/drive/1rZf3f7koBCyGOfZKiOutPtl0El6w6TfR?usp=sharing )

Steps:

  1. use pip3 install ultralytics==8.0.50 onnx==1.12.0 onnxruntime==1.12.0 onnx-simplifier==0.4.8 onnxsim==0.4.13
  2. Replace the code

    
class C2f(nn.Module):
    # CSP Bottleneck with 2 convolutions
    def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5):  # ch_in, ch_out, number, shortcut, groups, expansion
        super().__init__()
        self.c = int(c2 * e)  # hidden channels
        self.cv1 = Conv(c1, 2 * self.c, 1, 1)
        self.cv2 = Conv((2 + n) * self.c, c2, 1)  # optional act=FReLU(c2)
        self.m = nn.ModuleList(Bottleneck(self.c, self.c, shortcut, g, k=((3, 3), (3, 3)), e=1.0) for _ in range(n))

    def forward(self, x):
        # y = list(self.cv1(x).chunk(2, 1))
        # y.extend(m(y[-1]) for m in self.m)
        # return self.cv2(torch.cat(y, 1))
        print('forward C2f')
        x = self.cv1(x)
        x = [x, x[:, self.c:, ...]]
        x.extend(m(x[-1]) for m in self.m)
        x.pop(1)
        return self.cv2(torch.cat(x, 1))

    def forward_split(self, x):
        y = list(self.cv1(x).split((self.c, self.c), 1))
        y.extend(m(y[-1]) for m in self.m)
        return self.cv2(torch.cat(y, 1))

class Detect(nn.Module):
    # YOLOv8 Detect head for detection models
    dynamic = False  # force grid reconstruction
    export = False  # export mode
    shape = None
    anchors = torch.empty(0)  # init
    strides = torch.empty(0)  # init

    def __init__(self, nc=80, ch=()):  # detection layer
        super().__init__()
        self.nc = nc  # number of classes
        self.nl = len(ch)  # number of detection layers
        self.reg_max = 16  # DFL channels (ch[0] // 16 to scale 4/8/12/16/20 for n/s/m/l/x)
        self.no = nc + self.reg_max * 4  # number of outputs per anchor
        self.stride = torch.zeros(self.nl)  # strides computed during build

        c2, c3 = max((16, ch[0] // 4, self.reg_max * 4)), max(ch[0], self.nc)  # channels
        self.cv2 = nn.ModuleList(
            nn.Sequential(Conv(x, c2, 3), Conv(c2, c2, 3), nn.Conv2d(c2, 4 * self.reg_max, 1)) for x in ch)
        self.cv3 = nn.ModuleList(nn.Sequential(Conv(x, c3, 3), Conv(c3, c3, 3), nn.Conv2d(c3, self.nc, 1)) for x in ch)
        self.dfl = DFL(self.reg_max) if self.reg_max > 1 else nn.Identity()

    def forward(self, x):
        shape = x[0].shape  # BCHW
        for i in range(self.nl):
            x[i] = torch.cat((self.cv2[i](x[i]), self.cv3[i](x[i])), 1)
        if self.training:
            return x
        elif self.dynamic or self.shape != shape:
            self.anchors, self.strides = (x.transpose(0, 1) for x in make_anchors(x, self.stride, 0.5))
            self.shape = shape

        # if self.export and self.format == 'edgetpu':  # FlexSplitV ops issue
        #     x_cat = torch.cat([xi.view(shape[0], self.no, -1) for xi in x], 2)
        #     box = x_cat[:, :self.reg_max * 4]
        #     cls = x_cat[:, self.reg_max * 4:]
        # else:
        #     box, cls = torch.cat([xi.view(shape[0], self.no, -1) for xi in x], 2).split((self.reg_max * 4, self.nc), 1)
        # dbox = dist2bbox(self.dfl(box), self.anchors.unsqueeze(0), xywh=True, dim=1) * self.strides
        # y = torch.cat((dbox, cls.sigmoid()), 1)
        # return y if self.export else (y, x)
        print('forward Detect')
        pred = torch.cat([xi.view(shape[0], self.no, -1) for xi in x], 2).permute(0, 2, 1)
        return pred

    def bias_init(self):
        # Initialize Detect() biases, WARNING: requires stride availability
        m = self  # self.model[-1]  # Detect() module
        # cf = torch.bincount(torch.tensor(np.concatenate(dataset.labels, 0)[:, 0]).long(), minlength=nc) + 1
        # ncf = math.log(0.6 / (m.nc - 0.999999)) if cf is None else torch.log(cf / cf.sum())  # nominal class frequency
        for a, b, s in zip(m.cv2, m.cv3, m.stride):  # from
            a[-1].bias.data[:] = 1.0  # box
            b[-1].bias.data[:m.nc] = math.log(5 / m.nc / (640 / s) ** 2)  # cls (.01 objects, 80 classes, 640 img)
Export code:

from ultralytics import YOLO

model = YOLO("old-best.pt")
success = model.export(format="onnx", opset=12, simplify=True)


Here is my output from the export code

Ultralytics YOLOv8.0.50 🚀 Python-3.10.12 torch-2.0.1+cu118 CPU
Model summary (fused): 168 layers, 3005843 parameters, 0 gradients, 8.1 GFLOPs
forward C2f (×8)
forward Detect

PyTorch: starting from old-best.pt with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 8400, 65) (6.0 MB)

ONNX: starting export with onnx 1.12.0...
forward Detect
forward Detect
forward Detect
ONNX: simplifying with onnxsim 0.4.8...
============= Diagnostic Run torch.onnx.export version 2.0.1+cu118 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================

ONNX: export success ✅ 1.3s, saved as old-best.onnx (11.5 MB)

Export complete (2.0s)



3. use https://convertmodel.com/ with the "simplify", "optimize", "fp16" options enabled. Here is my params file: [old-best-sim-opt-fp16.param.txt](https://github.com/FeiGeChuanShu/ncnn-android-yolov8/files/11887677/old-best-sim-opt-fp16.param.txt)

4. I've used code https://github.com/Qengineering/YoloV8-ncnn-Raspberry-Pi-4/blob/main/yoloV8.cpp to run detection

I'm receiving `1655 objects found` on a picture where I have only one relevant object. What could be the issue?

I exported an already-trained model. Do I need to re-train it with the code changes, or is exporting alone fine?
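One thing worth checking when nearly every row survives: with the modified export, each of the 8400 rows carries raw class logits (the commented-out original forward applied cls.sigmoid() before returning), so whatever consumes the output must apply the sigmoid itself before thresholding, as this repo's yolo.cpp does. A minimal sketch of that per-row check; the row layout (4*reg_max box-distribution values followed by nc class logits) follows the Detect code above, and the function names are mine:

```python
import math

REG_MAX = 16  # DFL bins per box side

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def keep_row(row, nc, conf_thres=0.25):
    """row = [4*REG_MAX box-distribution logits, nc class logits].
    Keep the row only if its best class probability clears the threshold."""
    cls_logits = row[4 * REG_MAX:]
    assert len(cls_logits) == nc
    return max(sigmoid(v) for v in cls_logits) > conf_thres

# A confident row survives, an unconfident one does not:
print(keep_row([0.0] * 64 + [3.0], nc=1))   # sigmoid(3) ~ 0.95 -> True
print(keep_row([0.0] * 64 + [-3.0], nc=1))  # sigmoid(-3) ~ 0.05 -> False
```

If the consumer skips the sigmoid and compares raw logits against a probability threshold, a large fraction of rows can pass, which looks exactly like thousands of spurious detections.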

yrik commented 1 year ago

btw, Ultralytics is open to adding these code changes directly to the repo: https://github.com/ultralytics/ultralytics/issues/3412 Unfortunately, I'm not good enough with ncnn to make a PR.

Isabel-ying commented 1 year ago

I also recorded a short video explaining the conversion process:

For step 2, exporting the onnx, just create your own export.py and run it. [screenshot]

Hello, I converted the model following the method above and also created a separate export.py to convert the pt to onnx, but the app still crashes. Is there any other way?

duymanh-111 commented 1 year ago

Hello, I ran into the same problem: my custom ncnn model detects far too many objects. How do I convert a yolov8 model to an ncnn model correctly?

lpf242669 commented 10 months ago

I followed your instructions but my model doesn't detect anything after a successful run. Can you guide me in the right direction?

Try the modification method in this link: https://github.com/DataXujing/ncnn_android_yolov8 — modify both of the places described there. I only modified one of them and it still worked, but perhaps both should be modified. Also check whether the two places you modified correspond to the line numbers I changed.

I followed this method but the phone app keeps crashing; I've tried for several days. My steps were:
1. Modify the detect part of modules.py: class C2f and class Detect.
2. yolo task=detect mode=export model=yolov8n.pt format=onnx simplify=True opset=12 (I also tried without simplify and with opset 13; it still fails).
3. Convert online to yolov8n-sim-opt-fp16.param at https://convertmodel.com/#outputFormat=ncnn with simplifier/ncnnoptimize/fp16 checked.
4. Put the files into assets and change yolo.cpp to x.extract("output0", out). Note: the last line of the param file is: Concat /model.22/Concat_5 2 1 /model.22/Mul_2_output_0 /model.22/Sigmoid_output_0 output0. Neither num_class nor the class names were changed.
5. The app crashes every time it runs. [screenshot]

Hi, did you ever manage to solve the app crash problem?

mzoltan77 commented 7 months ago

Hello, I ran into the same problem: my custom ncnn model detects far too many objects. How do I convert a yolov8 model to an ncnn model correctly?

Did you ever get an answer? Same problem here... :(