dougsm / ggcnn

Generative Grasping CNN from "Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach" (RSS 2018)
BSD 3-Clause "New" or "Revised" License

About the code #24

Closed leo4707 closed 4 years ago

leo4707 commented 4 years ago
q_img = q_img.cpu().numpy().squeeze()
ang_img = (torch.atan2(sin_img, cos_img) / 2.0).cpu().numpy().squeeze()
width_img = width_img.cpu().numpy().squeeze() * 150.0

q_img = gaussian(q_img, 2.0, preserve_range=True)
ang_img = gaussian(ang_img, 2.0, preserve_range=True)
width_img = gaussian(width_img, 1.0, preserve_range=True)

Question 1: What does q_img mean? Question 2: What does "cpu().numpy().squeeze()" do?

dougsm commented 4 years ago

The network has 3 outputs, the quality (Q), the angle and the grasp width (see https://arxiv.org/pdf/1804.05172.pdf). q_img corresponds to the quality output.
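For illustration only, here is a minimal sketch of how the three maps can be turned into a single grasp (the repo's own post-processing is more involved); best_grasp is a hypothetical helper, and it assumes q_img, ang_img and width_img are the 300x300 numpy arrays produced by the snippet above:

import numpy as np

# Hypothetical helper (not part of the repo): take the pixel with the highest
# quality score as the grasp centre, then read the angle and width at that pixel.
def best_grasp(q_img, ang_img, width_img):
    row, col = np.unravel_index(np.argmax(q_img), q_img.shape)
    return (row, col), ang_img[row, col], width_img[row, col]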

cpu().numpy().squeeze() is a chain of operations on the PyTorch tensor that moves the data from the GPU to the CPU and turns it into a 300x300 numpy array before any further processing. You can find out more in the PyTorch documentation for cpu(), numpy() and squeeze().
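As a toy illustration (not code from the repo), assuming a network output of shape (1, 1, 300, 300), i.e. batch and channel dimensions of size 1:

import torch

t = torch.rand(1, 1, 300, 300)   # stand-in for a network output
if torch.cuda.is_available():
    t = t.cuda()                 # put it on the GPU if one is available

arr = t.cpu().numpy().squeeze()  # GPU -> CPU, tensor -> numpy, drop size-1 dims
print(arr.shape)                 # (300, 300)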

Hope that helps :-)

leo4707 commented 4 years ago

Thanks for your explanation.

leo4707 commented 4 years ago

@dougsm What does '--split' mean?