StanfordVL / JRMOT_ROS

Source code for JRMOT: A Real-Time 3D Multi-Object Tracker and a New Large-Scale Dataset
MIT License

labels in Range image #18

Closed rttariverdi67 closed 3 years ago

rttariverdi67 commented 3 years ago

Hi all, in order to get the range image, I apply the following transformation to each point p of the point cloud with coordinates (x, y, z):

x̄ = ⌊β / ∆β⌋,  ȳ = ⌊α / ∆α⌋

where α and β are the zenith and azimuth angles respectively, ∆α and ∆β are fixed-size steps determined by the resolution grid of the range image, and x̄ and ȳ are the indices that define the 2D pixel coordinates of the spherical image. Here is my code:


import numpy as np

angular = (0.5236, 2 * np.pi)     # approximation of 30º (vertical FOV) and 360º
shape = (16, 384)                 # range-image shape (rows, cols)
height, width = shape
y_angular, x_angular = angular
x_delta, y_delta = x_angular / width, y_angular / height

x, y, z = np.asarray(self.pcd.points).T
r = np.sqrt(x**2 + y**2 + z**2)

azimuth_angle = np.arctan2(-y, x) % (2 * np.pi)   # azimuth in [0, 2π)
elevation_angle = np.arcsin(z / r)                # elevation in [-π/2, π/2]

x_img = np.floor(azimuth_angle / x_delta).astype(int) % width  # wrap the 2π edge
y_img = np.floor(elevation_angle / y_delta).astype(int)
y_img -= y_img.min()              # shift rows so the lowest elevation maps to row 0

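For reference, once x_img and y_img are computed they can be scattered into an actual range image. A minimal numpy-only sketch; the random cloud is a made-up stand-in for self.pcd.points, and dropping rows above the vertical FOV is my own assumption about how out-of-FOV points should be handled:

```python
import numpy as np

# synthetic stand-in for self.pcd.points (not real JRDB data)
rng = np.random.default_rng(0)
pts = rng.uniform(-10.0, 10.0, size=(1000, 3))
x, y, z = pts.T
r = np.sqrt(x**2 + y**2 + z**2)

height, width = 16, 384
x_delta = 2 * np.pi / width       # azimuth resolution
y_delta = 0.5236 / height        # elevation resolution

azimuth_angle = np.arctan2(-y, x) % (2 * np.pi)
elevation_angle = np.arcsin(z / r)

x_img = np.floor(azimuth_angle / x_delta).astype(int) % width
y_img = np.floor(elevation_angle / y_delta).astype(int)
y_img -= y_img.min()

# scatter ranges into the image; points above the vertical FOV are dropped
range_image = np.zeros((height, width))
valid = y_img < height
range_image[y_img[valid], x_img[valid]] = r[valid]
print(range_image.shape)
```

Note that when two points fall into the same pixel, the one written last wins; a real range image would usually keep the nearest return instead.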
On the other hand, I have labels as cx, cy, cz, l, w, h in label_list, obtained from the 3D labels for each point cloud: (cx, cy, cz) are the coordinates of the center of the cuboid and (l, w, h) are its length, width, and height, as mentioned here. Using these 3D labels, I take all points located inside a given cuboid and assign them label 1; the rest are assigned 0. This is how I do it:

for cx, cy, cz, l, w, h in label_list:
    self.pcd_labels[np.all([cx - l/2 <= x, x <= cx + l/2,
                            cy - w/2 <= y, y <= cy + w/2,
                            cz - h/2 <= z, z <= cz + h/2], axis=0)] = 1
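For clarity, this box test can be exercised end-to-end on a few synthetic points. A minimal sketch; the points and the cuboid below are made up, and pcd_labels stands in for self.pcd_labels:

```python
import numpy as np

# three made-up points: two inside the box, one far away
x, y, z = np.array([[0.0, 0.0, 0.0],
                    [0.4, 0.1, 0.5],
                    [3.0, 3.0, 3.0]]).T
pcd_labels = np.zeros(3, dtype=int)

# one made-up cuboid: cx, cy, cz, l, w, h
label_list = [(0.0, 0.0, 0.5, 1.0, 0.5, 1.8)]

for cx, cy, cz, l, w, h in label_list:
    inside = np.all([cx - l/2 <= x, x <= cx + l/2,
                     cy - w/2 <= y, y <= cy + w/2,
                     cz - h/2 <= z, z <= cz + h/2], axis=0)
    pcd_labels[inside] = 1

print(pcd_labels)  # → [1 1 0]
```

Note that this test is axis-aligned: it only selects the right points if the cuboids in label_list are themselves axis-aligned in the same frame as the point cloud.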

The result is shown below; as you can see, some of the labels are wrong.

What could be the problem? [screenshot of the labeled range image]

Thanks in advance for your support :)

abhijeetshenoi commented 3 years ago

It's a little hard to understand the problem here. What do the different colors mean, and what exactly is going wrong?

rttariverdi67 commented 3 years ago

The blue points correspond to person point clouds; in other words, I color all points that lie inside the 3D boxes provided in the dataset. The problem is that if (cx, cy, cz) are the coordinates of the center of the cuboid and (l, w, h) are the length, width and height, then

cx - l/2 <= x, 
cy - w/2 <= y,
cz - h/2 <= z,
x <= cx + l/2,
y <= cy + w/2,
z <= cz + h/2

should give me exactly the points inside the 3D box. BUT, as the pictures show, some parts of the body lie outside the box, while other points, such as the wall, are labeled as person!

abhijeetshenoi commented 3 years ago

Could you generate a 3D visualization of the point cloud + labels? We've extensively checked the labelling and verified that it's accurate.

The issue here could be one of:

1) The transformation from 3D to 2D.
2) It is not guaranteed that all parts of the body are within the box at all times. We approximate a constant-sized 3D box for each person, and make a best effort to fit that box to the person in every annotated frame. There can be cases where portions of a person's body are outside the box, but not as much as you are seeing.

I think 1) might be the cause for this.
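One quick way to test hypothesis 1) without a full 3D visualization is to project a few points whose image coordinates are known in advance. A minimal sketch using the same formulas and resolution values as the snippet earlier in this thread (these are assumptions from that snippet, not our dataset code):

```python
import numpy as np

height, width = 16, 384
x_delta = 2 * np.pi / width      # azimuth resolution
y_delta = 0.5236 / height       # elevation resolution

def project(x, y, z):
    """3D point -> (row, col) using the same formulas as the snippet above."""
    r = np.sqrt(x**2 + y**2 + z**2)
    azimuth = np.arctan2(-y, x) % (2 * np.pi)
    elevation = np.arcsin(z / r)
    col = int(np.floor(azimuth / x_delta)) % width
    row = int(np.floor(elevation / y_delta))  # before the min-offset shift
    return row, col

# a point straight ahead on +x with zero elevation should land at (0, 0)
print(project(5.0, 0.0, 0.0))
# a point behind the sensor should land about half the image width away
print(project(-5.0, 0.0, 0.0))
```

If points with known geometry land in unexpected pixels, the 3D-to-2D mapping (or an axis convention) is the culprit rather than the labels.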

abhijeetshenoi commented 3 years ago

Closing due to lack of an update. If this is still unresolved, please reopen.