Open Feiyuyu0503 opened 1 month ago
Hello @Feiyuyu0503, 🌟
Thank you for your suggestion and for sharing these interesting projects with us! Visual prompting for auto-labeling is indeed a cutting-edge technique that has the potential to greatly enhance the labeling process.
We appreciate the links to the repositories for Segment-Everything-Everywhere-All-At-Once and DINOv. These models are quite innovative, and the idea of integrating similar functionality into X-AnyLabeling is exciting.
As you've mentioned, the T-Rex model is currently commercial and closed-source, which poses a challenge for direct integration. However, we are keeping a close eye on open-source alternatives that could offer comparable capabilities. Given the rapid pace of research in this area, we believe it is only a matter of time before more accessible models emerge that we can integrate into X-AnyLabeling.
We will continue to monitor the progress of these and other related projects. If and when a suitable open-source model becomes available, we will certainly consider adding support for visual prompting in auto-labeling to X-AnyLabeling.
Thank you for your patience and for your contribution to the community. We're always looking for ways to improve and expand our toolset, and feedback like yours is invaluable. 🙏
Stay tuned for updates, and please feel free to share any more thoughts or findings you come across in this domain!
Best regards, X-AnyLabeling Maintainer
Here are two research projects with publicly released checkpoints:
https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once
https://github.com/UX-Decoder/DINOv
The hope is to leverage these models to achieve a T-Rex-like effect.
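For readers unfamiliar with the request, the core workflow of visual-prompt auto-labeling is: the user draws one exemplar region, and the tool propagates that label to all visually similar regions. A minimal sketch of that idea follows. Everything here is hypothetical and illustrative — `VisualPromptLabeler`, `embed_fn`, and the similarity threshold are not actual SEEM, DINOv, T-Rex, or X-AnyLabeling APIs; a real integration would replace `embed_fn` with one of those models' region encoders.

```python
# Hypothetical sketch of visual-prompt auto-labeling.
# All names (VisualPromptLabeler, embed_fn, ...) are illustrative only,
# not real SEEM / DINOv / X-AnyLabeling APIs.
import numpy as np


def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


class VisualPromptLabeler:
    """Label candidate regions by similarity to a user-drawn exemplar.

    `embed_fn` stands in for a visual-prompt model's region encoder:
    it maps an image region to a feature vector.
    """

    def __init__(self, embed_fn, threshold: float = 0.8):
        self.embed_fn = embed_fn
        self.threshold = threshold

    def label(self, exemplar_region, candidate_regions, class_name):
        # Encode the user's exemplar once, then compare every candidate
        # region against it; matches above the threshold inherit the label.
        query = self.embed_fn(exemplar_region)
        results = []
        for region in candidate_regions:
            score = cosine_sim(query, self.embed_fn(region))
            if score >= self.threshold:
                results.append(
                    {"region": region, "label": class_name, "score": score}
                )
        return results


if __name__ == "__main__":
    # Toy demo: regions are already feature vectors, so the "encoder"
    # is the identity. Two of three candidates resemble the exemplar.
    labeler = VisualPromptLabeler(
        embed_fn=lambda r: np.asarray(r, dtype=float), threshold=0.9
    )
    matches = labeler.label(
        exemplar_region=[1.0, 0.0, 0.0],
        candidate_regions=[[1.0, 0.0, 0.0], [0.99, 0.1, 0.0], [0.0, 1.0, 0.0]],
        class_name="cat",
    )
    print(len(matches))  # the orthogonal region is rejected
```

The design choice worth noting is that the exemplar is encoded once and reused, so labeling scales linearly in the number of candidate regions; a production version would batch the candidate embeddings through the model.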