fspindle opened this issue 2 years ago
The following video shows the results of the dISAS demonstrator of the H2020 PULSAR project, for which Magellium was the project coordinator. The dISAS demonstrator aims to demonstrate and provide simulation tools for large-scale in-space assembly operations. This video was presented during the IROS 2020 RISAM workshop:
The ViSP library has been used for:
Final project video:
More in-depth details and results about the dPAMT demonstrator (precise assembly of mirror tiles):
Underwater demonstrator (dLSAFFE) for large-scale robotic manipulation in a simulated micro-gravity environment:
The following video demonstrates an accurate pointing task performed by a UR10 robot on a mock-up of an aircraft part. This work was carried out in the framework of the joint lab ROB4FAM between Airbus and CNRS, in collaboration with INRIA. The demonstration includes:
https://peertube.laas.fr/videos/watch/9632ae06-2466-46cf-9d4d-6f45ee8b4d91
https://user-images.githubusercontent.com/1412746/163962621-eedb67ed-0bcd-4564-9684-31d9a9636ffb.mp4
The following video demonstrates deburring operations performed by the mobile robot Tiago on a mock-up of an aircraft pylon. This work was carried out in the framework of the joint lab ROB4FAM between Airbus and CNRS. The demonstration includes:
https://peertube.laas.fr/videos/watch/6f40ea79-abcd-490e-a616-3a67bf297d93
https://user-images.githubusercontent.com/1412746/163962889-b35da7c0-3eb1-4610-babf-fa5affda5a8b.mp4
The following video is associated with the paper Integrating Features Acceleration in Visual Predictive Control, published in IEEE Robotics and Automation Letters.
https://user-images.githubusercontent.com/1412746/163961762-836e5936-0476-48ae-98e6-343e76cb8b0a.mp4
The following video is associated with the paper Defocus-based Direct Visual Servoing, published in IEEE Robotics and Automation Letters in 2021.
ViSP's vpLuminance visual feature was used and extended to take the defocus variation into account in the control law (an interaction matrix involving the Laplacian of the image).
https://user-images.githubusercontent.com/1412746/164652552-c314fb8c-eaf4-449c-9666-aed2eb8eb1d9.mp4
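For readers who want to reproduce this kind of photometric (direct) visual servoing with ViSP, here is a minimal sketch based on the luminance feature (vpFeatureLuminance in current ViSP releases). The defocus extension described in the paper is not part of standard ViSP; the depth, gain, and image variables below are placeholders.

```cpp
#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpColVector.h>
#include <visp3/core/vpImage.h>
#include <visp3/core/vpMatrix.h>
#include <visp3/visual_features/vpFeatureLuminance.h>

// Sketch of a photometric control law: the camera velocity is computed
// directly from the image intensities, v = -lambda * L^+ * (I - I*).
// I is the current image, Id the desired one, Z an approximate scene depth.
vpColVector photometricControlLaw(vpImage<unsigned char> &I, vpImage<unsigned char> &Id,
                                  vpCameraParameters &cam, double Z, double lambda)
{
  vpFeatureLuminance sI, sId;
  sI.init(I.getHeight(), I.getWidth(), Z);
  sI.setCameraParameters(cam);
  sI.buildFrom(I);                 // current luminance feature

  sId.init(Id.getHeight(), Id.getWidth(), Z);
  sId.setCameraParameters(cam);
  sId.buildFrom(Id);               // desired luminance feature

  vpMatrix L;
  sId.interaction(L);              // interaction matrix at the desired pose
  vpColVector e;
  sI.error(sId, e);                // photometric error I - I*

  return (L.pseudoInverse() * e) * (-lambda);  // 6-DoF camera velocity
}
```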
The following video is a screen capture showing the model-based tracking of an aircraft panel that allows the TORO humanoid robot to reach a predefined pose. This video was recorded during an integration session at DLR RMC, within the Comanoid H2020 project.
https://user-images.githubusercontent.com/8035162/179125317-d71d6c1a-2a9c-42cd-8834-065f9058fc12.mp4
The following video is also a screen capture, showing the model-based tracking of a bracket feeder that allows the TORO humanoid robot to grasp different brackets. This video was recorded during an integration session at DLR RMC, within the Comanoid H2020 project.
https://user-images.githubusercontent.com/8035162/179125395-a6bd13fa-dc32-43d5-8bcd-46b7e2daa568.mp4
The three following videos show eye-in-hand visual servoing performing the assembly of an in-space primary mirror in simulation, within the PULSAR H2020 project. They demonstrate the versatility of the ViSP library:
https://user-images.githubusercontent.com/8035162/173954027-e476afee-8fc5-4f98-a792-b213f1d4caea.mp4
https://user-images.githubusercontent.com/8035162/173954148-7982b4b6-b89a-41ee-91c3-a9cedd70d72b.mp4
https://user-images.githubusercontent.com/8035162/173954237-ca4ca0f7-d928-4071-b4c4-31fd8b5bcc00.mp4
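As an illustration of how such an eye-in-hand task can be set up with ViSP, here is a minimal sketch using vpServo with a point feature. The feature values and gain are hypothetical placeholders, not the actual PULSAR setup.

```cpp
#include <visp3/core/vpColVector.h>
#include <visp3/visual_features/vpFeaturePoint.h>
#include <visp3/vs/vpServo.h>

int main()
{
  // Eye-in-hand, image-based visual servoing task
  vpServo task;
  task.setServo(vpServo::EYEINHAND_CAMERA);
  task.setInteractionMatrixType(vpServo::CURRENT);  // interaction matrix at the current pose
  task.setLambda(0.5);                              // control gain (placeholder value)

  // One point feature: normalized coordinates (x, y) and depth Z (placeholder values)
  vpFeaturePoint s, sd;
  s.buildFrom(0.10, 0.05, 0.8);   // current feature, updated from the tracker in the loop
  sd.buildFrom(0.0, 0.0, 0.8);    // desired feature
  task.addFeature(s, sd);

  // In the control loop: update s from image measurements, then
  vpColVector v = task.computeControlLaw();  // camera velocity to send to the robot
  (void)v;
  return 0;
}
```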
This video shows results obtained with ViSP in a joint work between the COSMER lab and the PRAO team at Ifremer. They mainly used ViSP's matrix classes and visual servoing with home-made interaction matrices and features to control an ROV.
https://user-images.githubusercontent.com/1412746/174536460-f6f96d7d-0f4d-4d85-8325-ed45ac9f7b76.mp4
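For reference, a hand-written control law of this kind only needs ViSP's matrix classes. The sketch below is a generic example (L, e, and lambda are assumed to come from user-defined features), not the actual COSMER/Ifremer implementation.

```cpp
#include <visp3/core/vpColVector.h>
#include <visp3/core/vpMatrix.h>

// Generic visual servoing law built only from ViSP's matrix classes:
// v = -lambda * L^+ * e, where L is a user-defined interaction matrix
// and e a user-defined feature error.
vpColVector customControlLaw(const vpMatrix &L, const vpColVector &e, double lambda)
{
  return (L.pseudoInverse() * e) * (-lambda);
}
```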
The following video shows a highly accurate automatic positioning task (accuracy of a few tens of nanometers) performed in a microrobotic workcell using a photometry-based direct visual servoing method [1]. The vision system consisted of a high-magnification optical microscope, while the robotic system was a lab-made microrobotic positioning platform. The whole approach was implemented within the ViSP framework.
[1] C. Collewet, E. Marchand. Photometric visual servoing. IEEE Transactions on Robotics, 27(4), pp. 828-834, 2011.
The following video illustrates a 6-DoF positioning task achieved using wavelet-coefficient-based direct visual servoing. Instead of the geometric visual features of standard vision-based approaches, this controller uses wavelet coefficients as control inputs, i.e., the multiresolution coefficients of the wavelet transform of the image in the spatial domain. The implementation was done with ViSP, and the experimental evaluation covered different conditions of use (nominal conditions, 2D/3D scenes, lighting variations, and partial occlusions).
The video below illustrates a weakly calibrated three-view visual servoing control law for laser steering, ultimately aimed at surgical procedures. It revisits the conventional trifocal constraints governing a three-view geometry to make them more suitable for the design of an efficient trifocal vision-based controller. An explicit control law is thereby derived, without any matrix inversion or complex matrix manipulation.
Different ViSP functions were used; for instance, the vpDot visual tracker was used to track the laser spot.
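For readers unfamiliar with this tracker, here is a minimal sketch of how a blob such as a laser spot can be tracked with vpDot; the image source and display are assumed to be set up by the application.

```cpp
#include <visp3/blob/vpDot.h>
#include <visp3/core/vpImage.h>
#include <visp3/core/vpImagePoint.h>

// Track a bright blob (e.g. a laser spot) across an image sequence.
void trackLaserSpot(vpImage<unsigned char> &I)
{
  vpDot dot;
  dot.setGraphics(true);             // overlay the tracked pixels for debugging
  dot.initTracking(I);               // click on the spot in the first image
  // In the acquisition loop:
  dot.track(I);                      // update the blob position
  vpImagePoint cog = dot.getCog();   // center of gravity, in pixels
  (void)cog;
}
```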
The following video shows the automatic control of a laser spot using a visual servoing approach. This work was developed in the context of minimally invasive surgery of the middle ear, aimed at burring residual pathological tissue called cholesteatoma. The approach combines an optimal path-generation method based on the well-known Traveling Salesman Problem with image-based visual servoing to treat the residual cholesteatoma, which appears as debris spread over the middle-ear cavity.
ViSP was used for both the laser spot and the cholesteatoma visual tracking.
The video below illustrates a path-following method for laser steering in the context of vocal-fold surgery. In this work, non-holonomic control of the unicycle model is used to implement velocity-independent visual path following for laser surgery. The controller was tested in simulation as well as experimentally under several conditions of use: different velocity profiles (step input, successive step inputs, sinusoidal inputs), optimized/non-optimized gains, a time-varying path (simulating patient breathing), and complex curves with varying curvature. The experiments, performed at 587 Hz, showed an average path-following accuracy below 0.22 pixels (≈ 10 µm) with a standard deviation of 0.55 pixels (≈ 25 µm), and a relative velocity distortion of less than 10^−6 %.
The following video shows the operation of a vision-based control law achieving automatic 6-DoF positioning tasks. The objective of this work was to be able to reposition a biological sample under an optical device for non-invasive depth examination at any given time (i.e., to perform repetitive and accurate optical characterizations of the sample). The optical examination, also called optical biopsy, is performed with an optical coherence tomography (OCT) system. The OCT device is used both to perform a 3-dimensional optical biopsy and as a sensor to control the robot motion during the repositioning process. The visual servoing controller uses the 3D pose of the studied biological sample, estimated directly from the C-scan OCT images using a Principal Component Analysis (PCA) framework.
The following video shows on the left the model-based tracking of a circuit breaker, and on the right the model-based tracking of the HRP-4 hand. This allows servoing the robot hand and tool to a predefined pose w.r.t. one of the circuit breaker switches. This video was recorded during an integration session at LIRMM, within the Comanoid H2020 project.
https://user-images.githubusercontent.com/8035162/179122034-7e826fa1-9d55-4419-bff8-691c9a6cec12.mp4
The following video shows the model-based tracking of a circuit breaker. The combination of edge, KLT, and depth features allows for stable, robust tracking and precise pose computation of the circuit breaker. This video was recorded during an integration session at LIRMM, within the Comanoid H2020 project.
https://user-images.githubusercontent.com/8035162/179122986-a8fa19df-457d-42fa-9741-433feeb806ac.mp4
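For readers who want to try this kind of tracking, here is a minimal sketch using ViSP's generic model-based tracker (vpMbGenericTracker) with edge and KLT features. The model and configuration file names are placeholders, and adding depth features additionally requires feeding the tracker with the point cloud from the RGB-D sensor.

```cpp
#include <visp3/core/vpCameraParameters.h>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/core/vpImage.h>
#include <visp3/mbt/vpMbGenericTracker.h>

// Model-based tracking of a known object combining moving edges and KLT keypoints.
void trackObject(vpImage<unsigned char> &I, const vpCameraParameters &cam)
{
  vpMbGenericTracker tracker;
  tracker.setTrackerType(vpMbGenericTracker::EDGE_TRACKER | vpMbGenericTracker::KLT_TRACKER);
  tracker.loadConfigFile("model.xml");   // moving-edge and KLT settings (placeholder file)
  tracker.loadModel("model.cao");        // CAD model of the object (placeholder file)
  tracker.setCameraParameters(cam);
  tracker.setDisplayFeatures(true);
  tracker.initClick(I, "model.init");    // initial pose from user clicks (needs a display)

  vpHomogeneousMatrix cMo;
  // In the acquisition loop:
  tracker.track(I);
  tracker.getPose(cMo);                  // pose of the object in the camera frame
  (void)cMo;
}
```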
The following video shows the model-based tracking of a printer. It compares tracking and pose computation between edges+KLT features and edges+depth features. The use of depth features, provided by an ASUS Xtion sensor, allows for more stable and precise tracking and pose computation. This video was recorded during an integration session at LIRMM, within the Comanoid H2020 project.
https://user-images.githubusercontent.com/8035162/179123612-514c6e1f-e644-48d9-ae6a-1d767eb71e9c.mp4
This thread was created to allow all ViSP users to post videos of results obtained in research, industrial, European, or other projects.
It complements the videos that the team regularly publishes on the vispTeam YouTube channel and the Rainbow team's channel.
To contribute to this thread, indicate the name of your laboratory, company, or entity, add your video or a link to it, and give a short description of the video.
Feel free to contribute to this thread to promote ViSP usage.