AutoLidarPerception / SqueezeSeg

Implementation of SqueezeSeg, convolutional neural networks for LiDAR point cloud segmentation: https://arxiv.org/abs/1710.07368
BSD 2-Clause "Simplified" License

*.npy dataset preprocessing procedure #1

Open · Durant35 opened 6 years ago

Durant35 commented 6 years ago

Point cloud information

[image]

Class labels in [0, mc.NUM_CLASS-1]

[image]
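
For readers without the screenshots, here is a minimal sketch of inspecting one preprocessed sample, assuming the usual SqueezeSeg layout of a (64, 512, 6) array whose channels are (x, y, z, intensity, range, label); the file name is hypothetical:

```python
import numpy as np

# Load one preprocessed sample (hypothetical file name).
sample = np.load('2011_09_26_0001_0000000000.npy')

print(sample.shape)                   # expected: (64, 512, 6)
xyz = sample[:, :, :3]                # Cartesian coordinates per pixel
intensity = sample[:, :, 3]           # reflectance
rng = sample[:, :, 4]                 # range (distance to the sensor)
labels = sample[:, :, 5].astype(int)  # per-pixel class labels
print(np.unique(labels))              # should lie in [0, mc.NUM_CLASS-1]
```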

Durant35 commented 6 years ago

LiDAR hardware characteristics

[image]
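
The projection constants used in the code later in this thread follow directly from this geometry; a small sketch, assuming an HDL-64E-like sensor with 64 beams spanning a 26.9° vertical field of view and 2048 firings per revolution:

```python
# Angular resolutions implied by the sensor geometry (assumption:
# HDL-64E-like LiDAR, 64 beams over 26.9 deg vertical FOV,
# 2048 firings per revolution).
NUM_BEAMS = 64
V_FOV_DEG = 26.9
FIRINGS_PER_REV = 2048

v_res = V_FOV_DEG / NUM_BEAMS    # 0.4203125 deg per beam
h_res = 360.0 / FIRINGS_PER_REV  # 0.17578125 deg per firing

print(v_res, h_res)              # matches v_res / h_res in the code below
print(int(360.0 / h_res) + 1)    # projected image width: 2049 columns
```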

Durant35 commented 6 years ago

References

gyubeomim commented 6 years ago

Awesome work!

It's working well in my environment.

Why don't you open a pull request to make it public? :-)

Durant35 commented 6 years ago

@tigerk0430 Thanks for your reply, but this is still in the bug-fixing stage, so stay tuned ;-).

Lapayo commented 5 years ago

Hey, I tried using your preprocessing; however, my data seems to have a lot more artifacts than with the original preprocessing. Am I doing anything wrong, or are the two preprocessing pipelines different?

Edit: It looks like https://github.com/BichenWuUCB/SqueezeSeg/issues/37

Durant35 commented 5 years ago

Code from https://github.com/BichenWuUCB/SqueezeSeg/issues/37#issuecomment-503385002

```python
import os

import numpy as np


def lidar_to_2d_front_view_3(points,
                             v_res=26.9 / 64,    # vertical resolution (deg)
                             h_res=0.17578125):  # horizontal resolution (deg)
                             # h_res=0.08
    x_lidar = points[:, 0]  # -71~73
    y_lidar = points[:, 1]  # -21~53
    z_lidar = points[:, 2]  # -5~2.6
    r_lidar = points[:, 3]  # reflectance, 0~0.99

    # Distance relative to origin
    d = np.sqrt(x_lidar ** 2 + y_lidar ** 2 + z_lidar ** 2)

    # Convert resolutions to radians
    v_res_rad = np.radians(v_res)
    h_res_rad = np.radians(h_res)

    # PROJECT INTO IMAGE COORDINATES
    # Dropping the minus sign mirrors the image left/right -- but why was it
    # flipped before?
    # -1024~1024, -3.14~3.14
    x_img_2 = np.arctan2(-y_lidar, x_lidar)  # horizontal (azimuth) angle
    # The arcsin form only covers half the range: the radius is always
    # positive, so points in front of and behind the sensor project together.
    # x_img_2 = -np.arcsin(y_lidar / d)  # horizontal angle, -1.57~1.57

    angle_diff = np.abs(np.diff(x_img_2))
    threshold_angle = np.radians(250)
    angle_diff = np.hstack((angle_diff, 0.001))  # pad one element; diff output is one short
    angle_diff_mask = angle_diff > threshold_angle
    # print('angle_diff_mask', np.sum(angle_diff_mask), threshold_angle)

    x_img = np.floor(x_img_2 / h_res_rad).astype(int)  # angle -> pixel column
    x_img -= np.min(x_img)  # shift so no coordinate is negative
    # x_img[x_lidar < 0] = 0  # keep only x > 0: points behind the sensor are
    # not the data we need, and the arcsin values would repeat there

    # -52~10, -0.4137~0.078
    # y_img_2 = -np.arctan2(z_lidar, d)
    # Same value range, but the minus sign is needed, otherwise the image is
    # upside down
    y_img_2 = -np.arcsin(z_lidar / d)  # vertical (elevation) angle
    y_img = np.round(y_img_2 / v_res_rad).astype(int)  # angle -> pixel row
    y_img -= np.min(y_img)  # shift so no coordinate is negative
    y_img[y_img >= 64] = 63  # may exceed the 64 beams, so clamp

    x_max = int(360.0 / h_res) + 1  # width of the projected image
    # x_max = int(180.0 / h_res) + 1

    # Fill the five per-pixel features from the paper: (x, y, z, intensity, range)
    depth_map = np.zeros((64, x_max, 5))  # +255
    depth_map[y_img, x_img, 0] = x_lidar
    depth_map[y_img, x_img, 1] = y_lidar
    depth_map[y_img, x_img, 2] = z_lidar
    depth_map[y_img, x_img, 3] = r_lidar
    depth_map[y_img, x_img, 4] = d

    # Crop the central 90-degree field of view: 512 pixels wide, 64 high
    start_index = int(x_max / 2 - 256)
    result = depth_map[:, start_index:(start_index + 512), :]

    np.save(os.path.join('../data/samples/0001-3' + '.npy'), result)
    print('write 0001-3')
```
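
A hedged usage sketch: loading one raw KITTI Velodyne scan (a flat float32 binary of x, y, z, reflectance quadruples) and running it through the function above. The file path is hypothetical:

```python
import numpy as np

# Load one raw KITTI Velodyne scan: a flat float32 binary of
# (x, y, z, reflectance) quadruples. The path is hypothetical.
scan = np.fromfile('../data/velodyne/0001.bin', dtype=np.float32)
points = scan.reshape(-1, 4)

# Project to the 64 x 512 x 5 front-view tensor and save it as *.npy.
lidar_to_2d_front_view_3(points)
```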