google-research / ravens

Train robotic agents to learn pick and place with deep learning for vision-based manipulation in PyBullet. Transporter Nets, CoRL 2020.
https://transporternets.github.io
Apache License 2.0

Visualizing HeatMaps #9

Open anmolsrivastava97 opened 3 years ago

anmolsrivastava97 commented 3 years ago

Hello, I am exploring this repository after reading the main paper and would like to visualize the produced pick and place heatmaps. Is there any way to do that? A little help on this would be highly appreciated.

DanielTakeshi commented 3 years ago

@anmolsrivastava97 You can do this by looking at the output of the neural networks. The networks produce an image with the same dimensions as the input, which you can save as an image after appropriately scaling the values. Here's a possible code sketch, where `attention` is the raw output of the attention network:

    import cv2
    import numpy as np

    def get_attention_heatmap(attention):
        # Reshape the raw attention scores to the (320, 160) input image size.
        vis_attention = np.float32(attention).reshape((320, 160))
        # Normalize values to the [0, 255] range.
        vis_attention = vis_attention - np.min(vis_attention)
        vis_attention = 255 * vis_attention / np.max(vis_attention)
        # Apply a rainbow colormap (OpenCV returns BGR channel order).
        vis_attention = cv2.applyColorMap(np.uint8(vis_attention), cv2.COLORMAP_RAINBOW)
        # Swap channels to RGB for plotting/saving libraries that expect RGB.
        vis_attention = cv2.cvtColor(vis_attention, cv2.COLOR_BGR2RGB)
        return vis_attention
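
For example, one way to save the result (the file name below is just an illustration, and `attention` stands for whatever your script gets back from the attention model):

    import matplotlib.pyplot as plt

    heatmap = get_attention_heatmap(attention)   # attention: raw network output.
    plt.imsave('attention_heatmap.png', heatmap)  # imsave expects RGB, which the function returns.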

If something like this doesn't work, please let us know what you tried in more detail.

Codie-land commented 3 years ago

How would this type of visualisation work for the transport model? Since transport is pick-conditioned, how should the reshaping be done for the output of transport(img, p)?
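
For example, assuming the raw transport output for transport(img, p) can be viewed as per-pixel place scores with one channel per discrete rotation (the (320, 160, n_rotations) layout below is just my guess, and it reuses get_attention_heatmap and the imports from above), would something along these lines be the right idea?

    def get_transport_heatmaps(transport_output, n_rotations):
        # Assumed layout: raw place scores that can be reshaped into one
        # (320, 160) slice per discrete rotation.
        scores = np.float32(transport_output).reshape((320, 160, n_rotations))
        # Reuse the attention colormap routine on each rotation slice.
        return [get_attention_heatmap(scores[:, :, i]) for i in range(n_rotations)]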