prashanthr05 closed this issue 2 years ago
Thanks to the help of @Yeshasvitvs, a dataset with iCub equipped with D435 Realsense (without IMU) was collected while doing some walking experiments in the Vicon room. I will update the details soon.
Had to go for smaller aspect ratios due to GIF sizes.
| Experiment | GIF (trimmed, real-time) |
| --- | --- |
| Arbitrary walking (142) | |
| Walking front and back (154) | |

- Experiment 142: joypad-controlled walking
- Experiment 154: fixed, manual forward and backward walking commands sent through RPC
The post-processed datasets contain Vicon, encoder, IMU (root link), feet wrench, and RGBD image measurements. The Vicon, encoder, IMU, and wrench measurements are appropriately sub-sampled and synced to the encoder measurements, while the image data remain at a much lower rate (roughly 1 image for every 4-5 proprioceptive measurements). The Vicon trajectory is aligned with the estimator's frame of reference via single-state trajectory alignment, assuming the estimator reference frame coincides with the robot's foot at the first time instant.
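A minimal sketch of the sub-sampling/syncing step, assuming each sensor stream is a NumPy array of timestamps plus values and we pick the nearest sensor sample for each encoder timestamp. The function name `sync_to_encoder` is illustrative, not the actual post-processing code used for the dataset:

```python
import numpy as np

def sync_to_encoder(enc_t, sensor_t, sensor_vals):
    """For each encoder timestamp, return the sensor sample whose
    timestamp is nearest. Assumes enc_t and sensor_t are sorted."""
    # Insertion indices such that sensor_t[idx-1] <= t < sensor_t[idx]
    idx = np.searchsorted(sensor_t, enc_t)
    idx = np.clip(idx, 1, len(sensor_t) - 1)
    left = sensor_t[idx - 1]
    right = sensor_t[idx]
    # Step back one index where the left neighbor is closer in time
    idx = idx - ((enc_t - left) < (right - enc_t))
    return sensor_vals[idx]
```

Nearest-neighbor picking is the simplest choice here; interpolation would be an alternative for the slowly varying proprioceptive signals.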
Datasets for experiments 148 and 150 which are similar to 154 are also available.
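The single-state trajectory alignment mentioned above can be sketched as follows, using only the first estimator/Vicon pose pair to re-express the whole Vicon trajectory in the estimator frame. Poses are 4x4 homogeneous transforms; the names are illustrative, not the actual tooling:

```python
import numpy as np

def align_single_state(T_est0, T_vicon0, T_vicon_traj):
    """Align a Vicon trajectory to the estimator frame using the
    first pose pair only: T_align = T_est0 * inv(T_vicon0)."""
    T_align = T_est0 @ np.linalg.inv(T_vicon0)
    return [T_align @ T for T in T_vicon_traj]
```

By construction, the aligned trajectory's first pose coincides with the estimator's initial pose; any drift between the two trajectories after that point is what the evaluation measures.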
Realsense D435 Intrinsics used for the experiment data collection.
Intrinsic of "Color" / 640x480 / {YUYV/RGB8/BGR8/RGBA8/BGRA8/Y16}
Width: 640
Height: 480
PPX: 323.237915039062
PPY: 238.748596191406
Fx: 618.967163085938
Fy: 619.085632324219
Distortion: Inverse Brown Conrady
Coeffs: 0 0 0 0 0
FOV (deg): 54.68 x 42.38
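As a sanity check, the reported FOV follows from the pinhole model applied to the intrinsics above, FOV = 2·atan(size / (2·focal)). A short sketch (values copied from the listing; no librealsense call involved):

```python
import math

# D435 color-stream intrinsics reported above
WIDTH, HEIGHT = 640, 480
FX, FY = 618.967163085938, 619.085632324219

def fov_deg(focal_px, size_px):
    """Full field of view in degrees for a pinhole camera."""
    return math.degrees(2.0 * math.atan(size_px / (2.0 * focal_px)))

hfov = fov_deg(FX, WIDTH)   # horizontal, ~54.68 deg
vfov = fov_deg(FY, HEIGHT)  # vertical,   ~42.38 deg
```

With zero distortion coefficients, as listed, the inverse Brown-Conrady model reduces to this plain pinhole projection.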
We might need to plan for an experiment to collect a fresh dataset from the robot inclusive of all necessary sensors and ground truth information.
P.S. Linking an issue from a private repository below (only members of the `dic-iit` organization have access).

We had conducted some experimental campaigns in the past to collect whole-body distributed sensor data from `iCubGenova04`. However, none of these acquired datasets may completely match the validation requirements for KinDynVIO. In the past, we collected a basic base-estimation dataset along with stereo cameras from the head and an externally mounted Realsense camera while the robot walked forward in an environment with ArUco markers. However, there might be some problems with the recorded timestamps (specifically, we do not have received timestamps, only transmitted timestamps). Furthermore, we do not have Vicon ground truth. In any case, this might be the closest dataset we have for validating KinDynVIO without conducting a new experiment.

cc @HosameldinMohamed