epiception / CalibNet

[DEPRECATED] Self-Supervised Extrinsic Calibration using 3D Spatial Transformer Networks
https://epiception.github.io/CalibNet/
MIT License

Some questions about your paper #6

gogojjh closed this issue 6 years ago

gogojjh commented 6 years ago

Hi, I just read your IROS 2018 paper "CalibNet: Self-Supervised Extrinsic Calibration using 3D Spatial Transformer Networks" and have a specific question:

  1. Considering the loss function design (Eq. (2) and Eq. (5)), how do you get the ground truth ($D_{gt}$, $X_{exp}$) to train your network?

Thanks!

epiception commented 6 years ago

I'm not sure I see the issue: the ground-truth semi-dense depth maps are obtained by projecting the LiDAR point cloud from the dataset onto the camera image plane. Please check out the databuilder scripts for more details.

Cheers!
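
For reference, here is a minimal sketch of that LiDAR-to-depth-map projection, assuming a KITTI-style setup with a 4x4 extrinsic `T_cam_velo` and a 3x3 intrinsic `K` (names and shapes are illustrative, not taken from the repo's databuilder scripts):

```python
import numpy as np

def lidar_to_depth_map(points, T_cam_velo, K, img_h, img_w):
    """Project (N, 3) LiDAR points into the image to build a sparse depth map."""
    # Homogeneous coordinates, then transform into the camera frame
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    pts_cam = (T_cam_velo @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera
    pts_cam = pts_cam[pts_cam[:, 2] > 0]

    # Pinhole projection to pixel coordinates
    uvw = (K @ pts_cam.T).T
    uv = uvw[:, :2] / uvw[:, 2:3]

    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < img_w) & (v >= 0) & (v < img_h)

    # Write z as depth; occlusions between overlapping points are not resolved here
    depth = np.zeros((img_h, img_w), dtype=np.float32)
    depth[v[valid], u[valid]] = pts_cam[valid, 2]
    return depth
```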

Chrislzy1993 commented 5 years ago

@gogojjh Hi, did you figure it out? I have the same question and am confused about the paper: how does it realize self-supervision, and how does it ensure the optimization moves in a good direction?

epiception commented 5 years ago
  1. The paper was re-titled "Geometrically Supervised Extrinsic Calibration" to make clear that it does not claim absolute self-supervision.
  2. Coming to your point: we do use the dataset's ground truth, but consider the case where you have a calibrated stereo pair that has not been calibrated with a LiDAR. You can still use the training methodology, training on (noisy) depth from the stereo pair (see the sketch below). It goes without saying that you can then align input LiDAR clouds and images at test time without issue. Our approach remains unique in its loss function and training methodology.
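
To illustrate that stereo fallback, here is a minimal sketch of producing (noisy) depth from a rectified stereo pair with OpenCV's semi-global matcher; `focal_px` and `baseline_m` are assumed to come from the stereo calibration, and the matcher parameters are illustrative, not CalibNet's:

```python
import cv2
import numpy as np

def stereo_depth(left_gray, right_gray, focal_px, baseline_m):
    """Estimate a (noisy) depth map from a rectified grayscale stereo pair."""
    # Semi-global block matching; parameters are illustrative, not tuned
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,  # must be divisible by 16
        blockSize=5,
    )
    # OpenCV returns fixed-point disparity scaled by 16
    disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

    # depth = f * B / disparity for a rectified pair; invalid pixels stay 0
    depth = np.zeros_like(disp)
    valid = disp > 0
    depth[valid] = focal_px * baseline_m / disp[valid]
    return depth
```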