dsikar / cleargrasp

Cloned from git@github.com:Shreeyak/cleargrasp.git
Apache License 2.0

Code modified to work with a Zivid One+ RGB-D camera. The key change is the modification of the camera intrinsics in live_demo/zivid_live_demo.py, lines 53 through 56, commit #7862762:

    realsense_fx = 2763.10400390625 # 923.93823 # camera_intrinsics[0, 0]
    realsense_fy = 2763.77685546875 # 923.2997 # camera_intrinsics[1, 1]
    realsense_cx = 963.131469726562 # 651.2283 # camera_intrinsics[0, 2]
    realsense_cy = 595.361694335938 # 373.53592 # camera_intrinsics[1, 2]
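
For reference, these four values are the entries of a standard 3x3 pinhole camera matrix. The sketch below is illustrative only (the variable names and the projection example are not from the repository):

```python
import numpy as np

# Zivid One+ intrinsics as hard-coded in zivid_live_demo.py
fx, fy = 2763.10400390625, 2763.77685546875
cx, cy = 963.131469726562, 595.361694335938

# Standard pinhole camera matrix built from the intrinsics
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

# Project an example 3D camera-space point (X, Y, Z) to pixel coordinates
point = np.array([0.1, 0.05, 1.0])  # metres
u, v, w = K @ point
print(u / w, v / w)  # pixel column, pixel row
```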

and the scaling of inputs, at line 117 of the same file and commit:

    color_img, input_depth = get_zivid_rgb_depth()

get_zivid_rgb_depth is defined in live_demo/zivid_utils.py, same commit, where the aperture and exposure time settings are changed from the values used for the D415 camera to values found empirically to work better for the Zivid One+. Finally, the depth data is scaled to bring the Zivid One+ depth distribution closer to the D415 distribution:

(...)
    settings.acquisitions[0].aperture = 2.6 # 5.6
    settings.acquisitions[0].exposure_time = datetime.timedelta(microseconds=11500) #8333)
(...)
    sf = 2.5 / np.amax(zivid_input_depth) # approximate maximum observed in D415 depth divided by zivid maximum
    scaled_zivid_input_depth = zivid_input_depth * sf
(...)
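The scaling step can be illustrated in isolation. Below is a minimal sketch with synthetic depth values (the function name and the sample array are illustrative; the 2.5 m target maximum is the value used in the snippet above):

```python
import numpy as np

def scale_depth_to_d415_range(zivid_depth, target_max=2.5):
    """Rescale a Zivid depth map so its maximum matches the
    approximate maximum observed in D415 depth data (in metres)."""
    sf = target_max / np.amax(zivid_depth)
    return zivid_depth * sf

# Synthetic depth map in the Zivid One+ working range (illustrative values)
depth = np.array([[0.5, 1.0], [1.5, 3.2]])
scaled = scale_depth_to_d415_range(depth)
print(scaled.max())  # maximum is now ~2.5
```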

ClearGrasp: 3D Shape Estimation of Transparent Objects for Manipulation

Welcome to the official repository for the ClearGrasp paper. ClearGrasp leverages deep learning with synthetic training data to infer accurate 3D geometry of transparent objects from a single RGB-D image. The estimated geometry can be directly used for downstream robotic manipulation tasks (e.g. suction and parallel-jaw grasping).

This repository provides:

Resources: PDF | Website - Video, Dataset & Results

Authors: Shreeyak S Sajjan, Matthew Moore, Mike Pan, Ganesh Nagaraja, Johnny Lee, Andy Zeng, Shuran Song

Publication: International Conference on Robotics and Automation (ICRA), 2020

Download Data - Training Set
Download Data - Testing and Validation Set
Download Model checkpoints

Transparent objects possess unique visual properties that make it incredibly difficult for standard 3D sensors to produce accurate depth estimates. They often appear as noisy or distorted approximations of the surfaces that lie behind them. To address these challenges, we present ClearGrasp – a deep learning approach for estimating accurate 3D geometry of transparent objects for robotic manipulation. Our experiments demonstrate that ClearGrasp is substantially better than monocular depth estimation baselines and is capable of generalizing to real-world images and novel objects. We also demonstrate that ClearGrasp can be applied out of the box to improve the performance of grasping algorithms on transparent objects.
Given a single RGB-D image of transparent objects, ClearGrasp first feeds the color image to deep convolutional networks to infer surface normals, occlusion boundaries, and a segmentation mask of the transparent surfaces. The mask is used to "clean" the input depth by removing all points corresponding to transparent surfaces. ClearGrasp then runs a global optimization that uses the surface normals and occlusion boundaries to reconstruct the depth of the transparent objects.
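
The "cleaning" step amounts to invalidating depth wherever the predicted mask marks a transparent surface. A minimal sketch (array and function names are illustrative, not taken from the repository):

```python
import numpy as np

def clean_input_depth(input_depth, transparent_mask):
    """Zero out depth readings that fall on predicted transparent
    surfaces, leaving them to be filled in by global optimization."""
    cleaned = input_depth.copy()
    cleaned[transparent_mask] = 0.0  # 0 marks missing/invalid depth
    return cleaned

depth = np.array([[1.2, 1.3], [0.9, 1.1]])
mask = np.array([[False, True], [True, False]])  # True = transparent
print(clean_input_depth(depth, mask))
```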


Method Overview

Contact:

If you have any questions or find any bugs, please file a github issue or contact me:
Shreeyak Sajjan: shreeyak[dot]sajjan[at]gmail[dot]com

Installation

This code was tested with Ubuntu 16.04, Python 3.6, PyTorch 1.3, and CUDA 9.0.

System Dependencies

sudo apt-get install libhdf5-10 libhdf5-serial-dev libhdf5-dev libhdf5-cpp-11
sudo apt install libopenexr-dev zlib1g-dev openexr
sudo apt install xorg-dev  # display windows
sudo apt install libglfw3-dev

LibRealSense (Optional)

If you want to run demos with an Intel RealSense camera, you will need to install LibRealSense. It is required to stream and capture images from Intel RealSense D415/D435 stereo cameras.
Please check the installation guide to install from binaries, or compile from source.

# Register the server's public key:
$ sudo apt-key adv --keyserver keys.gnupg.net --recv-key C8B3A55A6F3EFCDE || sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-key C8B3A55A6F3EFCDE

# Ubuntu 16 LTS - Add the server to the list of repositories
$ sudo add-apt-repository "deb http://realsense-hw-public.s3.amazonaws.com/Debian/apt-repo xenial main" -u

# Install the libraries
$ sudo apt-get install librealsense2-dkms
$ sudo apt-get install librealsense2-utils

# Install the developer and debug packages
$ sudo apt-get install librealsense2-dev
$ sudo apt-get install librealsense2-dbg

Setup

  1. Clone the repository. A small sample dataset of 3 real and 3 synthetic images is included.

    git clone git@github.com:Shreeyak/cleargrasp.git
  2. Install pip dependencies by running in terminal:

    pip install -r requirements.txt
  3. Download the data:
    a) Model Checkpoints (0.9GB) - Trained checkpoints of our 3 deeplabv3+ models.
    b) Train dataset (Optional, 72GB) - Contains the synthetic images used for training the models. No real images were used for training.
    c) Val + Test datasets (Optional, 1.7GB) - Contains the real and synthetic images used for validation and testing.

    Extract these into the data/ directory or create symlinks to the extracted directories in data/.

  4. Compile depth2depth (global optimization):

    depth2depth is a C++ global optimization module used for depth completion, adapted from the DeepCompletion project. It resides in the api/depth2depth/ directory.

    • To compile the depth2depth binary, you will first need to identify the path to libhdf5. Run the following command in terminal:

      find /usr -iname "*hdf5.h*"

      Note the location of hdf5/serial. It will look similar to: /usr/include/hdf5/serial/hdf5.h.

    • Edit BOTH lines 28-29 of the makefile at api/depth2depth/gaps/apps/depth2depth/Makefile to add the path you just found as shown below:

      USER_LIBS=-L/usr/include/hdf5/serial/ -lhdf5_serial
      USER_CFLAGS=-DRN_USE_CSPARSE "/usr/include/hdf5/serial/"
    • Compile the binary:

      cd api/depth2depth/gaps
      export CPATH="/usr/include/hdf5/serial/"  # Ensure this path is same as read from output of `find /usr -iname "*hdf5.h*"`
      
      make

      This should create an executable, api/depth2depth/gaps/bin/x86_64/depth2depth. The config files will need the path to this executable to run our depth estimation pipeline.

    • Check the executable, by passing in the provided sample files:

      cd api/depth2depth/gaps
      bash depth2depth.sh

      This will generate gaps/sample_files/output-depth.png, which should match the expected-output-depth.png sample file. It will also generate RGB visualizations of all the intermediate files.

To run the code:

1. ClearGrasp Quick Demo - Evaluation of Depth Completion of Transparent Objects

We provide a script to run our full pipeline on a dataset and calculate accuracy metrics (RMSE, MAE, etc). Resides in the directory eval_depth_completion/.

2. Live Demo

We provide a demonstration of how to use our API on images streaming from realsense D400 series camera. Each new frame coming from the camera stream is passed through the depth completion module to obtain completed depth of transparent objects and the results are displayed in a window.
Resides in the folder live_demo/. This demo requires the LibRealSense SDK to be installed.

  1. Create a copy of the sample config file:

    cd live_demo
    cp config/config.yaml.sample config/config.yaml
  2. Edit config.yaml with paths to checkpoints of networks and depth2depth executable. Edit parameters as per your camera.

  3. Compile realsense.cpp:

    cd live_demo/realsense/
    mkdir build
    cd build
    cmake ..
    make

    This will create a binary build/realsense which is used to stream images from the realsense camera over TCP/IP. In case of issues, check FAQ.

  4. Connect a realsense d400 series camera to USB and start the camera stream:

    cd live_demo/realsense
    ./build/realsense

    This application will capture RGB and depth images from the realsense and stream them on a TCP/IP port. It will also open a window displaying the RGB and depth images.

  5. Run demo:

    python live_demo.py -c config/config.yaml

    This will open a new window displaying input image, input depth, intermediate outputs (surface normals, occlusion boundaries, mask), modified input depth and output depth. Expect around 1 FPS with an i7 7700K CPU and 1080ti GPU. The global optimization module is CPU bound and takes almost 1 sec per image at 256x144p resolution with CPU at 4.2GHz.

3. Training Code

The folder pytorch_networks/ contains the code used to train the surface normals, occlusion boundary and semantic segmentation models.

4. Dataset Capture GUI

Contains the GUI application that was used to collect the dataset of real transparent objects. First, the transparent objects were placed in the scene along with various random opaque objects such as cardboard boxes, decorative mantelpieces, and fruits. After capturing and freezing that frame, each transparent object was replaced with an identical spray-painted instance. Subsequent frames were then overlaid on the frozen frame so that the overlap between the spray-painted objects and the transparent objects they replaced could be observed. With high-resolution images, sub-millimeter accuracy can be achieved in the positioning of the objects.
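
The overlay used to align the spray-painted replicas can be approximated with simple alpha blending of the frozen and live frames. A sketch under that assumption (the function name and values are illustrative, not from the repository):

```python
import numpy as np

def overlay_frames(frozen_frame, live_frame, alpha=0.5):
    """Blend the live frame over the frozen transparent-object frame
    so the operator can align the spray-painted replica."""
    blended = (alpha * frozen_frame.astype(np.float32)
               + (1.0 - alpha) * live_frame.astype(np.float32))
    return blended.astype(np.uint8)

frozen = np.full((2, 2, 3), 200, dtype=np.uint8)  # captured frame
live = np.full((2, 2, 3), 100, dtype=np.uint8)    # current camera frame
print(overlay_frames(frozen, live))  # every pixel blended to 150
```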

Run the dataset_capture_gui/capture_image.py script to launch a window that streams images directly from a Realsense D400 series camera. Press 'c' to capture the transparent frame, 'v' to capture the opaque frame and spacebar to confirm and save the RGB and Depth images for both frames.

FAQ

Details on depth2depth

The depth2depth executable expects several input parameters; see the provided depth2depth.sh script for an example invocation.

Calculation of focal length in pixels (fx, fy)

The focal length in pixels is calculated from the camera's field of view and image size:

F = A / tan(a)
  Where,
    F = focal length in pixels
    A = image_size / 2
    a = FOV / 2

=> (focal length in pixels) = ((image width or height) / 2) / tan( FOV / 2 )

Here are the calculations for our synthetic images, with angles in degrees, for image output at 288x512p:

Fx = (512 / 2) / tan( 69.40 / 2 ) = 369.71 = 370 pixels
Fy = (288 / 2) / tan( 42.56 / 2 ) = 369.72 = 370 pixels
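
The same calculation in Python (the function name is illustrative; note the degree-to-radian conversion):

```python
import math

def focal_length_px(image_size_px, fov_deg):
    """Focal length in pixels from an image dimension and its field of view."""
    return (image_size_px / 2) / math.tan(math.radians(fov_deg) / 2)

fx = focal_length_px(512, 69.40)  # horizontal FOV -> ~369.7 px
fy = focal_length_px(288, 42.56)  # vertical FOV   -> ~369.7 px
print(round(fx), round(fy))  # 370 370
```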

Notes on data:

  1. The 4x4 transformation matrix for each object in the scene can give incorrect rotations since it is not normalized. Use the provided quaternion to get the rotation of each object.
  2. Some objects are present in the scene, but not visible to the camera. Your code will have to account for such objects when parsing through the data, using the provided masks.

ERROR: No module named open3d

In case of Open3D not being recognized, try installing with:

pip uninstall open3d-python
pip uninstall open3d
pip install open3d --no-cache-dir

FIX for librealsense version V2.15 and earlier

Change the below line:

// Find and colorize the depth data
rs2::frame depth_colorized = color_map.colorize(aligned_depth);

to

// Find and colorize the depth data
rs2::frame depth_colorized = color_map(aligned_depth);

ERROR: depth2depth.cpp:11:18: fatal error: hdf5.h: No such file or directory

Make sure HDF5 is installed, and that you edited both lines in the makefile to add the path to hdf5, as per the directions in the Installation section.
Also make sure you exported CPATH before compiling depth2depth, as mentioned above (export CPATH="/usr/include/hdf5/serial/").

ERROR: /usr/bin/ld: cannot find -lrealsense2

You may face this error when compiling realsense.cpp. This may occur when using later versions of librealsense (>=2.24, circa Jun 2019).
This error can be resolved by compiling LibRealSense from source. Please follow the official instructions.

How to change the image resolution streamed from the realsense camera?

You can change the image resolution by changing the corresponding lines in the live_demo/realsense/realsense.cpp file and re-compiling realsense:

int stream_width = 640;
int stream_height = 360;
int depth_disparity_shift = 25;
int stream_fps = 30;

Also change the following lines in the live_demo/realsense/camera.py file to match the cpp file:

self.im_height = 360
self.im_width = 640
self.tcp_host_ip = '127.0.0.1'
self.tcp_port = 50010