hhaAndroid opened 2 years ago
Support deployment based on MMDeploy
Pick: Added a script to verify whether the installation was successful
Add a script to convert yolo-style *.txt format annotations to COCO in PR#161 (a minimal conversion sketch follows this list)
Pick :
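For reference, a minimal sketch of what such a yolo-txt-to-COCO conversion can look like. This is not the script from PR#161; the directory layout, class list, and the assumption that each label line is `class cx cy w h` in normalized coordinates are illustrative:

```python
import json
from pathlib import Path

from PIL import Image


def yolo_to_coco(img_dir, label_dir, classes, out_json):
    """Convert YOLO-style .txt labels to a single COCO annotation file."""
    images, annotations = [], []
    ann_id = 1
    for img_id, img_path in enumerate(sorted(Path(img_dir).glob('*.jpg')), 1):
        width, height = Image.open(img_path).size
        images.append(dict(id=img_id, file_name=img_path.name,
                           width=width, height=height))
        label_file = Path(label_dir) / f'{img_path.stem}.txt'
        if not label_file.exists():
            continue
        for line in label_file.read_text().splitlines():
            cls, cx, cy, w, h = line.split()
            # YOLO stores normalized center-x/center-y/width/height;
            # COCO wants absolute top-left x/y plus width/height.
            w_abs, h_abs = float(w) * width, float(h) * height
            x = float(cx) * width - w_abs / 2
            y = float(cy) * height - h_abs / 2
            annotations.append(dict(
                id=ann_id, image_id=img_id, category_id=int(cls) + 1,
                bbox=[x, y, w_abs, h_abs], area=w_abs * h_abs, iscrowd=0))
            ann_id += 1
    categories = [dict(id=i + 1, name=name) for i, name in enumerate(classes)]
    Path(out_json).write_text(json.dumps(
        dict(images=images, annotations=annotations, categories=categories)))


# Hypothetical layout: data/images/*.jpg with matching data/labels/*.txt
yolo_to_coco('data/images', 'data/labels', ['person', 'car'], 'train.json')
```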
Can you post a video tutorial on how to customize module embedding into the network? Take the Transformer module in the original yolov5 library for example, how to integrate the module designed by myself into the network. I want to do a series of experiments based on mmyolo, but don't konw how to modify the file clearly.
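No answer appears in this thread, but the general MMEngine registry pattern gives a starting point. A minimal sketch, assuming a self-written block in a hypothetical `my_modules/` package; the class, file layout, and config hook below are illustrative, not MMYOLO's documented recipe for this case:

```python
# my_modules/my_transformer_block.py -- hypothetical file; register the block
# so configs can reference it by name through the MODELS registry.
import torch
import torch.nn as nn

from mmyolo.registry import MODELS


@MODELS.register_module()
class MyTransformerBlock(nn.Module):
    """Toy self-attention block that preserves the input shape (B, C, H, W)."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)   # (B, H*W, C) token sequence
        attn_out, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + attn_out)   # residual + layer norm
        return tokens.transpose(1, 2).reshape(b, c, h, w)
```

A config would then add `custom_imports = dict(imports=['my_modules.my_transformer_block'], allow_failed_imports=False)` so the registration runs at parse time, and the block can be referenced as `type='MyTransformerBlock'` wherever the surrounding network accepts a registered module.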
Are you planning to support yolov5 1280 pretrained models, yolov6 m/l models and yolov7 training?
Hi @fcakyon, everything you mentioned is already in progress and will be released soon 😄
Great news! I am the maintainer of sahi, can't wait to support mmyolo once these features are released 💯
Hi @fcakyon, I am the one who wants to add sahi support in MMYOLO. What a coincidence! We can keep in touch 😄
Wow, great coincidence 😮 Let's keep in touch 🚀
Hi, we plan to integrate sahi in v0.1.3. The current plan is:
How do you feel? @fcakyon
@hhaAndroid sounds great! What is your implementation plan? Do you want to include it as a tool or as a separate model? How can I help you with this?
Let's open an issue to discuss!
Could you prioritize support for yolo-pose?
Hi~ For yolo-pose, are you referring to the YOLO-Pose from Texas Instruments or another version? If there is a specific version you would like MMYOLO to integrate, could you add a comment in issue#233? Thx!!
Add more result analysis functions. Refer to https://github.com/dbolya/tide.
I want to use a model implemented in mmyolo inside a config file I have written with mmdetection. For example, I want to add one of mmyolo's backbones to a complete model defined by mmdetection. Is it possible? What should I do?
Hi @chihuajiao, sorry for the late reply. You could refer to the tutorial to implement it.
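For readers hitting the same question later: the cross-library pattern the tutorial describes amounts to importing mmyolo's modules at config-parse time and referencing them with an explicit scope prefix. A hedged sketch only; the base config, deepen/widen factors, and channel numbers are assumptions to adapt, not a verified recipe:

```python
# MMDetection 3.x style config fragment (sketch). Assumes mmyolo is installed
# in the same environment so its registry entries can be imported.
_base_ = 'mmdet::retinanet/retinanet_r50_fpn_1x_coco.py'

# Import mmyolo.models at config-parse time so its backbones get registered.
custom_imports = dict(imports=['mmyolo.models'], allow_failed_imports=False)

model = dict(
    backbone=dict(
        _delete_=True,                    # drop the ResNet-50 fields from _base_
        type='mmyolo.YOLOv5CSPDarknet',   # scope-prefixed MMYOLO backbone
        deepen_factor=0.33,
        widen_factor=0.5,
        norm_cfg=dict(type='BN'),
        act_cfg=dict(type='SiLU', inplace=True)),
    # At widen_factor=0.5 the CSPDarknet stages output [128, 256, 512] channels,
    # so the neck's in_channels must be updated to match.
    neck=dict(in_channels=[128, 256, 512]))
```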
Hi! I found that both MMYOLO and MMDetection contain implementations of YOLOX-s. What are the differences between the two? In our experiments we used the AP-10K dataset from MMPose, which also contains bounding box information like COCO, and there is a big gap between the results of MMYOLO and MMDetection. I would like to know the differences between the two implementations. Thanks a lot for your time!
Hi, I can only speak to the MMPose bbox detection you mentioned. Most of the algorithms in MMPose are two-stage heatmap-based methods, which require human bbox detection in their first stage. That is why COCO bbox information is used in those implementations.
Thanks for your reply! I may not have explained my question clearly. I do know the two main paradigms of pose estimation. I would like to use MMYOLO or MMDetection for an animal object detection task, and I simply use the AP-10K dataset provided in MMPose, which follows the same format as COCO. I trained YOLOX-s with MMDetection and MMYOLO respectively. The results of the two frameworks should be very close, but in fact there is a big gap between them, about 10 AP. I guess there are some differences in the YOLOX-s implementations between the two detection frameworks.
Well, I still can't tell why there is such a difference in your results. For the YOLOX implementations in MMDet and MMYOLO, the only difference is in the mosaic part, which you could check here and here. If you still have questions about the implementation and final performance, you are welcome to open a new issue with your benchmark table so that the maintainers may be able to find out more details.
I implemented yolox-pose based on mmyolo here, https://github.com/Bovey0809/mmyolo-pose
That is a pretty good job!
Hi! Does mmyolo support mmselfsup? I want to use MoCo to self-supervised pretrain a model, and then use the pretrained weights with yolov5. Can this be realized?
Hi @arkerman, of course, MMYOLO supports MMSelfSup. Here's the example: https://mmyolo.readthedocs.io/en/latest/recommended_topics/replace_backbone.html#use-backbone-network-implemented-in-mmselfsup.
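For later readers, the linked page amounts to a config along these lines. A hedged sketch only: the base config name and checkpoint URL are placeholders, and the out_indices/channel numbers should be checked against the actual document:

```python
# Sketch: swap YOLOv5's backbone for a self-supervised (MoCo v3) ResNet-50
# from MMSelfSup. Base config name and checkpoint URL are placeholders.
_base_ = './yolov5_s-v61_syncbn_8xb16-300e_coco.py'

custom_imports = dict(imports=['mmselfsup.models'], allow_failed_imports=False)
checkpoint_file = 'https://download.openmmlab.com/mmselfsup/.../mocov3_resnet50.pth'  # placeholder

widen_factor = 1.0            # real channel numbers below, so no extra scaling
channels = [512, 1024, 2048]  # C3/C4/C5 channels of ResNet-50

model = dict(
    backbone=dict(
        _delete_=True,                 # remove the CSPDarknet settings from _base_
        type='mmselfsup.ResNet',       # scope-prefixed MMSelfSup backbone
        depth=50,
        out_indices=(2, 3, 4),         # stages producing C3, C4, C5 (stem is index 0)
        norm_cfg=dict(type='BN', requires_grad=True),
        init_cfg=dict(type='Pretrained', checkpoint=checkpoint_file)),
    neck=dict(
        type='YOLOv5PAFPN',
        widen_factor=widen_factor,
        in_channels=channels,
        out_channels=channels),
    # The head's input channels must be updated to match the new neck outputs.
    bbox_head=dict(
        head_module=dict(in_channels=channels, widen_factor=widen_factor)))
```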
Add a distillation example to the YOLO series.
Hi @www516717402, here's a distillation example of RTMDet, https://github.com/open-mmlab/mmyolo/tree/main/configs/rtmdet/distillation.
V0.5.0(2023.1)
[ ] code
- [ ] Support YOLOv8 instance seg (YOLOv8 支持实例分割)
- [ ] Added a script to verify whether the installation was successful (新增快速验证是否安装成功的脚本) [Feature] Add a script to verify whether the installation was successful #487
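For reference, a quick installation smoke test on the MMEngine-based stack typically looks like the sketch below; this is not necessarily the script added in #487, and the config and checkpoint filenames are placeholders assumed to be downloaded beforehand (e.g. via mim download mmyolo):

```python
# Smoke test: build a detector from an MMYOLO config and run a single inference.
from mmdet.apis import init_detector, inference_detector
from mmyolo.utils import register_all_modules

# Register MMYOLO's models/datasets/transforms with the global registries.
register_all_modules()

config_file = 'yolov5_s-v61_syncbn_fast_8xb16-300e_coco.py'       # placeholder
checkpoint_file = 'yolov5_s-v61_syncbn_fast_8xb16-300e_coco.pth'  # placeholder

model = init_detector(config_file, checkpoint_file, device='cpu')
result = inference_detector(model, 'demo/demo.jpg')
print(result)  # a DetDataSample; reaching this line means the install works
```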
Hi, does mmyolo support YOLOv8 ins-seg now? If not, when will it be supported? I just saw a commit and failed to train a YOLOv8 ins-seg model following the code in that commit. :sob:
What is the current state of yolov8 ins-seg support? Is it in progress, or were the plans for yolov8 support abandoned?
Adding quantization to MMYOLO would be a great feature. YOLO is already fast and relatively accurate, but with quantization techniques (like INT8, ...), it would be much more powerful!
I want to use a model implemented in mmyolo inside a config file I have written with mmdetection. For example, I want to add one of mmyolo's backbones to a complete model defined by mmdetection. Is it possible? What should I do?
Hi, sorry for the late reply. You could refer to the tutorial to implement it.
Hi, the link is not available. Could you offer a new link on how to use mmyolo with mmdetection?
Hi~ To my knowledge, you may find some quantization demos in MMDeploy, since in most scenarios people use weight quantization when deploying models. Meanwhile, MMRazor supports distillation and sparsification algorithms.
@xin-li-67 Thanks for your reply. AFAIK, MMDeploy doesn't support MMYOLO and I couldn't find any demo/documentation/code in MMRazor that works on YOLOv8. Feel free to correct me if I am wrong.
Hello @hhaAndroid! Can you help me out with support for yolox-ins-head?
Any plan to add YOLOv10?
We keep this issue open to collect feature requests from users and hear your voice. Our monthly release plan is also available here.
You can either:
V0.5.0(2023.1)
V0.2.0(2022.11)
- Support YOLOv6 MLX models (支持 YOLOv6 MLX 模型) https://github.com/open-mmlab/mmyolo/pull/265
- Align PPYOLOE training mAP (对齐 PPYOLOE 训练精度) https://github.com/open-mmlab/mmyolo/pull/259
- Align YOLOv7 training mAP (对齐 YOLOv7 训练精度) https://github.com/open-mmlab/mmyolo/pull/243 https://github.com/open-mmlab/mmyolo/pull/310
- Integrate the sahi repo (集成 sahi) https://github.com/open-mmlab/mmyolo/issues/230 https://github.com/open-mmlab/mmyolo/pull/284
- demo/featmap_vis_demo.py script supports image-folder and URL input, referencing demo/demo_image.py (参考 demo/demo_image.py 脚本,为 demo/featmap_vis_demo.py 支持文件夹和 url 输入) https://github.com/open-mmlab/mmyolo/pull/248
- Support exporting image_demo.py inference results to labelme-format label files (支持 image_demo.py 结果导出 labelme 格式的标签文件) https://github.com/open-mmlab/mmyolo/pull/288
- Support splitting a large COCO annotation file into train + val + test or trainval + test annotation files (支持划分大的 COCO 标签文件为 train + val + test 或 trainval + test 标签文件) https://github.com/open-mmlab/mmyolo/pull/311
- Add a How-to document on YOLOv5 + ResNet50 self-supervised training weights with mmselfsup (How-to 新增 YOLOv5 + ResNet50 使用 mmselfsup 自监督训练的权重文档) https://github.com/open-mmlab/mmyolo/pull/291
- Add a document on how to use mim to call mmdet or other OpenMMLab repo scripts (新增如何通过 mim 跨库调用其他 OpenMMLab 脚本的文档) https://github.com/open-mmlab/mmyolo/pull/321

Collected features
Chinese video resources (中文视频资源), index: https://github.com/open-mmlab/mmyolo/blob/dev/docs/zh_cn/article.md
- Tools (工具类): 特征图可视化.ipynb (feature-map visualization notebook)
- Basics (基础类)
- Practical (实用类): 10分钟换遍主干网络.ipynb ("replace the backbone network in 10 minutes" notebook)
- Source-code walkthroughs (源码解读类)