GuicarMIS opened this issue 1 year ago
Transfer of defocus-based DVS is ongoing on branch DefocusDVS. A new DVS program has been created and a thin lens camera model added. TODO: adapt CFeatureDefocusedLuminance so that it derives from vpFeatureLuminance and can be used correctly in the vpServo task.
CFeatureDefocusedLuminance added. One last simulation test remains: compare with photometric VS using the same camera parameters (Yakumo lens), before trying defocus-based DVS on the UR10 robot as well.
Defocus-based DVS tested on the UR10 robot. It works, but performance is poor due to latencies in the new Spinnaker-based FLIR camera capture (see issue https://github.com/jrl-umi3218/DirectVisualServoing/issues/6).
With issue https://github.com/jrl-umi3218/DirectVisualServoing/issues/6 solved, performance is now good.
On the video: when the program starts, the robot moves to the desired joint configuration and captures the desired image (click to continue), then moves to the initial pose and starts capturing current images (click to continue), outputting velocity commands so that the robot converges back to the desired image (and thus the desired pose), until the number of iterations set in the code is reached. The program then outputs all the recorded data (images, residuals, etc.).
The recorded data (images, residuals, etc.) are saved in the "resultat" directory.
Derive ViSP’s photometric VS into the (existing, but developed slightly differently) defocus-based and non-linear scale-space-based variants (actually nothing to do on the VS side :wink: only the desired image needs to be prepared, so it might just be code copy-paste for me). Guillaume validates on JRL’s devices; Belinda validates on MIS’s devices.