Closed: ryan-summers closed this pull request 6 years ago.
Can you describe how it works, without going into all the details? For example, does it properly translate a bounding box placed on an object at different angles?
Reviewed 2 of 2 files at r1. Review status: all files reviewed at latest revision, all discussions resolved.
Yes. This program will always draw a proper bounding box around an object at any orientation or angle. Below is an explanation of the 3D mathematics of how this is done.
All the details:
Flow of information:
1) The program receives the link_state of the left_camera and the obstacles in the pool.
2) A 3D bounding box is drawn around each obstacle based on the provided width, height, and depth params.
3) The program converts the coordinates of the obstacles and their bounding vertices into the camera's frame of reference.
4) The 3D coordinates are mapped onto the fisheye lens of the camera (a sketch of this mapping follows the list).
5) The mapped coordinates then go through the same process that the undistortion node applies, which makes them identical to coordinates on the undistorted image.
6) A bounding box is drawn around the coordinates.
7) A DetectionArray message is published.
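A minimal sketch of step 4's lens mapping, assuming an equidistant fisheye model (r = f * theta) with (c1, c2) taken to be the principal point in pixels; the actual node's model may differ:

```python
import math

def project_fisheye(x, y, z, f, c1, c2):
    """Project a point in the camera frame (+z along the optical axis)
    onto the image under an equidistant fisheye model (r = f * theta)."""
    theta = math.atan2(math.hypot(x, y), z)  # angle from the optical axis
    phi = math.atan2(y, x)                   # azimuth around the axis
    r = f * theta                            # equidistant radial mapping
    return (c1 + r * math.cos(phi),          # u pixel coordinate
            c2 + r * math.sin(phi))          # v pixel coordinate
```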
Reviewed 1 of 2 files at r1, 1 of 2 files at r2, 1 of 1 files at r3. Review status: 2 of 3 files reviewed at latest revision, all discussions resolved.
Reviewed 1 of 1 files at r4. Review status: all files reviewed at latest revision, all discussions resolved.
Reviewed 5 of 5 files at r5. Review status: all files reviewed at latest revision, all discussions resolved.
Reviewed 4 of 5 files at r5, 1 of 1 files at r6. Review status: all files reviewed at latest revision, 7 unresolved discussions, some commit checks broke.
param/left_calibration.json, line 1 at r6 (raw file):
{
Is there a reason you're using json here rather than yaml?
param/simulator.yaml, line 26 at r6 (raw file):
height: 1.5
width: 0.1
depth: 0.1
This set of parameters feels a bit long-winded; is it possible to make these more concise?
scripts/sim_vision_network.py, line 61 at r6 (raw file):
lens_parameters = FisheyeLens(c1=685.4, c2=525.2, c3=0, f=307.53251966606937)
Where are these default parameters coming from?
scripts/sim_vision_network.py, line 64 at r6 (raw file):
class Point:
Why not use a 3-tuple?
scripts/sim_vision_network.py, line 178 at r6 (raw file):
def change_reference_frame(point, rot_quat, origin):
FYI, utilizing TF's tool set would make this function and quite a few others here unnecessary, and would likely be easier to use. The SD team has already done some work with publishing TF frames using links from Gazebo.
scripts/sim_vision_network.py, line 467 at r6 (raw file):
if __name__ == '__main__':
    rospy.init_node('fake_network', anonymous=True)
Is there a reason this is anonymous? You already modify the name in the launch file.
scripts/sim_vision_network.py, line 473 at r6 (raw file):
debug = rospy.get_param('~debug', default=False)
max_distance = rospy.get_param('~max_disatance', default=10)
`distance` rather than `disatance`
Review status: all files reviewed at latest revision, 7 unresolved discussions.
param/left_calibration.json, line 1 at r6 (raw file):
Is there a reason you're using json here rather than yaml?
This is the format that I wrote the calibration script to export in. I can update both if you would like. YAML definitely has its perks.
param/simulator.yaml, line 26 at r6 (raw file):
This set of parameters feels a bit long-winded; is it possible to make these more concise?
There may be a way to grab the link geometry from Gazebo, but that may be more complicated. These are all the required pieces of information for a label.
scripts/sim_vision_network.py, line 61 at r6 (raw file):
Where are these default parameters coming from?
They come from the characteristics of the fisheye camera model that we have in the cobalt SDF model.
scripts/sim_vision_network.py, line 64 at r6 (raw file):
Why not use a 3-tuple?
I wanted access to the parameters in x, y, and z notation.
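For reference, a `collections.namedtuple` would give the same x/y/z notation without a hand-rolled class; a minimal sketch:

```python
from collections import namedtuple

# A namedtuple keeps x/y/z attribute access while still behaving as a 3-tuple.
Point = namedtuple('Point', ['x', 'y', 'z'])

p = Point(1.0, 2.0, 3.0)
assert p.x == 1.0 and p[0] == 1.0  # attribute access and tuple indexing both work
x, y, z = p                        # unpacks like a plain tuple
```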
scripts/sim_vision_network.py, line 467 at r6 (raw file):
Is there a reason this is anonymous? You already modify the name in the launch file.
It needs to run on both the left and right cameras (as per stereo vision requirements).
scripts/sim_vision_network.py, line 473 at r6 (raw file):
`distance` rather than `disatance`
Good call.
Reviewed 1 of 1 files at r7. Review status: all files reviewed at latest revision, 2 unresolved discussions.
scripts/sim_vision_network.py, line 61 at r6 (raw file):
They come from the characteristics of the fisheye camera model that we have in the cobalt SDF model.
Could you put a comment above this to denote where these are in that SDF?
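For what it's worth, the requested comment might look something like the following sketch. `FisheyeLens` is stubbed here as a namedtuple so the snippet stands alone, and the wording about the SDF location is a placeholder, not a confirmed path:

```python
from collections import namedtuple

# Stub so this snippet is self-contained; the real class lives in the node.
FisheyeLens = namedtuple('FisheyeLens', ['c1', 'c2', 'c3', 'f'])

# Defaults taken from the fisheye lens block of the left camera sensor in the
# cobalt SDF model (exact file path omitted here; see the robot description).
lens_parameters = FisheyeLens(c1=685.4, c2=525.2, c3=0, f=307.53251966606937)
```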
Review status: 4 of 5 files reviewed at latest revision, 2 unresolved discussions.
scripts/sim_vision_network.py, line 178 at r6 (raw file):
FYI, utilizing TF's tool set would make this function and quite a few others here unnecessary, and would likely be easier to use. The SD team has already done some work with publishing TF frames using links from Gazebo.
Done.
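As a rough sketch of what the TF-based replacement can look like (the frame names here are placeholders, not the repository's actual frame ids):

```python
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers geometry_msgs types with tf2
from geometry_msgs.msg import PointStamped

rospy.init_node('frame_conversion_example')
tf_buffer = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(tf_buffer)  # keeps the buffer filled

point = PointStamped()
point.header.frame_id = 'world'  # source frame (placeholder name)
point.point.x, point.point.y, point.point.z = 1.0, 2.0, 0.5

# One call replaces the hand-rolled quaternion/origin math.
in_camera = tf_buffer.transform(point, 'left_camera', rospy.Duration(1.0))
```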
Reviewed 1 of 1 files at r8. Review status: all files reviewed at latest revision, all discussions resolved.
The fake vision network may be used under the following scenarios:
1) The user does not have NVidia cards.
2) The network is not trained on certain props.
3) Vision data needs to be labelled for training.
The tool utilizes the gazebo/link_states topic and passes the points through the appropriate image filtering algorithms to determine the bounding boxes of objects.
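As a rough sketch of the entry point for that flow (the link name here is a placeholder, not the repository's actual one):

```python
import rospy
from gazebo_msgs.msg import LinkStates

def on_link_states(msg):
    # msg.name and msg.pose are parallel lists covering every link in the world.
    poses = dict(zip(msg.name, msg.pose))
    camera_pose = poses.get('cobalt::left_camera')  # placeholder link name
    # ...project each labelled obstacle relative to camera_pose here...

rospy.init_node('fake_network')
rospy.Subscriber('/gazebo/link_states', LinkStates, on_link_states)
rospy.spin()
```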