Closed MohamedAssanhaji closed 4 months ago
Hi,
Concerning the 1st point:
Concerning the 2nd point, most works leverage pose estimation or keypoint extraction and then perform VS as a downstream task. You can find more information in the state-of-the-art chapter of my thesis (shameless promotion :))
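To make that pipeline concrete, here is a rough Python sketch (not ViSP code; `detect_keypoints` is a stand-in for whatever CNN you train, and the intrinsics are made-up values) of turning predicted pixel keypoints into the normalized image-plane coordinates that a classical IBVS law consumes:

```python
import numpy as np

# Hypothetical camera intrinsics: focal lengths and principal point, in pixels.
FX, FY, CX, CY = 600.0, 600.0, 320.0, 240.0

def detect_keypoints(image):
    """Stand-in for a CNN keypoint detector returning (N, 2) pixel coords (u, v).
    In practice this would be a forward pass through your trained network."""
    # Dummy output for illustration only.
    return np.array([[320.0, 240.0], [380.0, 300.0]])

def pixels_to_normalized(uv):
    """Convert pixel keypoints (u, v) to normalized image-plane coordinates
    (x, y) -- the 2D point features a classical IBVS law operates on."""
    uv = np.asarray(uv, dtype=float)
    x = (uv[:, 0] - CX) / FX
    y = (uv[:, 1] - CY) / FY
    return np.stack([x, y], axis=1)

# The CNN replaces the hand-crafted tracker; everything downstream is unchanged.
s = pixels_to_normalized(detect_keypoints(None))
```

The key point is that the learned detector only replaces the feature-extraction front end; the servo law itself stays classical.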
Sam
Hey Sam,
Thanks for the comprehensive breakdown! It's awesome to see all these possibilities laid out. I'll definitely dive into those tutorials and resources you mentioned (like, today!). Your work seems like a goldmine for me. And hey, no shame in promoting your thesis if it's packed with valuable insights :) Looking forward to exploring more of your work (added you on LinkedIn too XD)
Cheers!
Mohamed
I've been using ViSP for traditional IBVS with 2D cameras and various robots for pick-and-place tasks in challenging environments for a couple of years now. Following the trend in many papers, such as "An Image-Based Visual Servo Approach with Deep Learning for Robotic Manipulation" by Jingshu Liu and Yuan Li, I'm planning to integrate a CNN into my visual servoing algorithm. Currently, I'm working with different UR robots and an Intel RealSense D435 depth camera (both compatible with ViSP).
I have a couple of questions:
1) Can we integrate a pre-trained CNN into ViSP algorithms? If so, which of the many available algorithms would be the best fit?
2) Are there any existing or ongoing efforts that use other DNN-based approaches for feature extraction in visual servoing?
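For context, the servo law I'm currently running is the standard IBVS control law v = -λ L⁺ (s − s*) with 2D point features. A minimal NumPy sketch (plain Python rather than ViSP's C++ API, and assuming the feature depths Z are known), just to pin down what the CNN would have to plug into:

```python
import numpy as np

def interaction_matrix_point(x, y, Z):
    """Standard 2x6 interaction matrix of a normalized point feature (x, y)
    at depth Z, relating feature velocity to camera velocity (v, omega)."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,       -(1.0 + x * x),  y],
        [0.0,     -1.0 / Z,  y / Z, 1.0 + y * y, -x * y,         -x],
    ])

def ibvs_control_law(s, s_star, Z, lam=0.5):
    """Camera velocity v = -lambda * pinv(L) * (s - s*), stacking one
    interaction matrix per point feature."""
    s = np.asarray(s, dtype=float)
    s_star = np.asarray(s_star, dtype=float)
    L = np.vstack([interaction_matrix_point(x, y, z)
                   for (x, y), z in zip(s, Z)])
    e = (s - s_star).ravel()  # feature error
    return -lam * np.linalg.pinv(L) @ e  # 6-vector (vx, vy, vz, wx, wy, wz)
```

When the current features match the desired ones (s = s*), the commanded velocity is zero, as expected; the CNN's only job would be to supply `s` at each iteration.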