jhu-lcsr / costar_plan

Integrating learning and task planning for robots with Keras, including simulation, real robot, and multiple dataset support.
https://sites.google.com/site/costardataset
Apache License 2.0

Grasp Visualization: Evaluate pixel-relative offset at every pixel with single gripper pose offset model #381

Open ahundt opened 6 years ago

ahundt commented 6 years ago

We need to be able to generate a 3D visualization of many poses and the predicted grasp success values to determine if the results look reasonable.
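A rough sketch of that evaluation loop: sample a grid of pixel offsets, score each one, and keep the scores in grid shape for plotting. Here `predict_fn` and the toy scoring function are hypothetical stand-ins for a wrapper around the trained Keras model (one forward pass per sampled offset); this is an illustration, not the repository's actual code.

```python
import numpy as np

def grasp_success_map(predict_fn, depth_image, stride=8):
    """Evaluate a grasp-success predictor at a grid of pixel offsets.

    predict_fn: callable mapping an (N, 2) array of (y, x) pixel offsets
    to an (N,) array of predicted success probabilities (hypothetical
    stand-in for the real Keras model).
    Returns the flat offsets and the scores reshaped to the grid.
    """
    h, w = depth_image.shape[:2]
    ys, xs = np.mgrid[0:h:stride, 0:w:stride]
    offsets = np.stack([ys.ravel(), xs.ravel()], axis=1)
    scores = predict_fn(offsets)
    return offsets, scores.reshape(ys.shape)

# Toy predictor standing in for the network: success peaks at the
# image center, so the resulting map is easy to eyeball.
def toy_predict(offsets, center=(32, 32)):
    d = np.linalg.norm(offsets - np.asarray(center, dtype=float), axis=1)
    return np.exp(-d / 32.0)

offsets, score_grid = grasp_success_map(toy_predict, np.zeros((64, 64)))
```

The `score_grid` can then be rendered as a heatmap over the scene, or the offsets lifted to 3D poses for display in V-REP.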

Here is the model file to load for visualization: 2018-01-20-06-41-24_grasp_model_weights-delta_depth_sin_cos_3-grasp_model_levine_2016-dataset_062_b_063_072_a_082_b_102-epoch-014-val_loss-0.641-val_acc-0.655.h5.zip

Here is the updated scene file: 2018-01-20-0630-kukaRemoteApiCommandServerExample.ttt.zip

TODO:

V-REP code steps:

Remember, we will need to get from the full-sized images to the small images and back!
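A minimal sketch of that round trip, assuming a plain uniform resize between the full camera image and the small network input (no cropping or padding; shapes are `(height, width)`). The function names here are illustrative, not from the codebase:

```python
def full_to_small(x, y, full_shape, small_shape):
    """Map a full-resolution pixel coordinate into the resized image."""
    return (x * small_shape[1] / float(full_shape[1]),
            y * small_shape[0] / float(full_shape[0]))

def small_to_full(x, y, full_shape, small_shape):
    """Inverse map: a resized-image coordinate back to full resolution."""
    return (x * full_shape[1] / float(small_shape[1]),
            y * full_shape[0] / float(small_shape[0]))

# Round trip: a pixel in a 640x480 image, through a 128x96 input, and back.
sx, sy = full_to_small(320, 240, (480, 640), (96, 128))
fx, fy = small_to_full(sx, sy, (480, 640), (96, 128))
```

If the preprocessing crops before resizing, the crop offset has to be added on the way back as well.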

Bonus features that would help, but are not required:

TensorBoard steps:

Gradient visualization:
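For the gradient visualization, one natural output is a saliency map: the sensitivity of the predicted grasp-success score to each input pixel. The finite-difference version below is a model-free sketch so it runs standalone; with the real Keras model this would instead be a backprop gradient (e.g. `K.gradients` of the success output with respect to the input). `score_fn` is a hypothetical stand-in for a forward pass:

```python
import numpy as np

def saliency_map(score_fn, image, eps=1e-4):
    """Per-pixel sensitivity of a scalar score via finite differences."""
    base = score_fn(image)
    grad = np.zeros_like(image, dtype=float)
    for idx in np.ndindex(image.shape):
        bumped = image.copy()
        bumped[idx] += eps  # perturb one pixel at a time
        grad[idx] = (score_fn(bumped) - base) / eps
    return grad

# Sanity check with a linear score (the mean of a 4x4 image):
# every pixel should get sensitivity 1/16.
img = np.random.rand(4, 4)
grad = saliency_map(lambda im: im.mean(), img)
```

The resulting map can be overlaid on the input image to show which regions drive the success prediction.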

ahundt commented 6 years ago

This visualization enhancement was suggested by @cpaxton. Could you add your thoughts to this issue description?

ahundt commented 6 years ago

Initial work is in https://github.com/cpaxton/costar_plan/pull/429; this issue is still in progress.