Closed · deepdata-foundation closed this issue 4 years ago
There is no code in this project to generate planes. If you want to run inference on test images, you have to calculate them yourself or set them e.g. to [0 -1 0 1.65], which will only work on even and smooth road scenes with no hills. If you have a test vehicle, you can calculate the ground planes from your IMU data (yaw, pitch, and roll). First create a rotation matrix from the three angles: R = R_yaw R_pitch R_roll. Then assume an even plane defined by the normal vector n = [0, -1, 0]. Rotate that normal vector by the rotation matrix to get the normal vector of the actual plane: n_new = R * n. Finally, look up the height of your camera, which will be your distance to the plane (the fourth parameter).
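The IMU-based recipe above can be sketched in a few lines of NumPy. Note this is only a sketch under assumed conventions: camera coordinates with y pointing down (so the flat-ground normal is [0, -1, 0]), yaw about y, pitch about x, roll about z, all angles in radians; adjust the axes to match your own IMU frame.

```python
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """R = R_yaw @ R_pitch @ R_roll under one assumed axis convention."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    R_yaw = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])     # about y
    R_pitch = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # about x
    R_roll = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])    # about z
    return R_yaw @ R_pitch @ R_roll

n = np.array([0.0, -1.0, 0.0])         # flat-ground normal
R = rotation_matrix(0.0, 0.0, 0.0)     # level vehicle: identity rotation
n_new = R @ n                          # normal of the actual plane
camera_height = 1.65                   # fourth parameter (KITTI camera height)
plane = np.append(n_new, camera_height)  # [a, b, c, d]
```

With all angles zero this reproduces the flat-road plane [0, -1, 0, 1.65] mentioned above.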
@walzimmer
Thanks. I suspect that setting it e.g. to [0 -1 0 1.65] may not reproduce the test performance (KITTI 3D). Is there a planes folder you could share to reproduce the performance of your project?
@deepearthgo A better method to calculate the ground plane is to use the PyntCloud library and the RANSAC method:
```python
cloud = PyntCloud.from_file(path_in + file)
is_floor = cloud.add_scalar_field("plane_fit",
                                  n_inliers_to_stop=len(cloud.points) / 30,
                                  max_dist=0.001)
cloud.points = cloud.points[cloud.points["is_plane"] > 0]
three_points = cloud.get_sample("points_random", n=3, as_PyntCloud=False)
three_points_np = []
for i in range(len(three_points)):
    three_points_np.append(np.array([three_points["x"][i],
                                     three_points["y"][i],
                                     three_points["z"][i] + 1.293]))
vector_one = three_points_np[1] - three_points_np[0]
vector_two = three_points_np[2] - three_points_np[0]
normal = np.cross(vector_one, vector_two)
normal_normalized = normal / np.linalg.norm(normal)
```
Note that I have hard-coded the height of the camera to 1.293. If you have a camera with a different height, as in KITTI (1.65), then use that height as the third parameter.
@walzimmer Did you use the same plane for all images in your submission to KITTI? Or did you calculate it for each point cloud?
@YunzeMan I have calculated the planes for every frame and each point cloud scan. Using only one plane for all frames will not work because you also have hilly road scenes (road inclination).
You can also deactivate the use of ground planes, since it is optional: the method get_point_filter in obj_utils.py takes the ground plane as a parameter, and its docstring says:
:param ground_plane: Optional, coefficients of the ground plane
Just pass None instead of the ground plane coefficients.
Then use only the extents filter to filter out points. In avod_cars_example.config:

```
area_extents: [-40, 40, -5, 3, 0, 70]
```
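For intuition, an extents-only filter like the one described above can be sketched in plain NumPy. This is a hypothetical illustration, not AVOD's actual implementation; the extents layout [x_min, x_max, y_min, y_max, z_min, z_max] is assumed from the config value above.

```python
import numpy as np

# area_extents as in avod_cars_example.config: [x_min, x_max, y_min, y_max, z_min, z_max]
area_extents = [-40, 40, -5, 3, 0, 70]

def extents_filter(points, extents):
    """Boolean mask: True for points inside the axis-aligned extents box."""
    x_min, x_max, y_min, y_max, z_min, z_max = extents
    return ((points[:, 0] >= x_min) & (points[:, 0] <= x_max) &
            (points[:, 1] >= y_min) & (points[:, 1] <= y_max) &
            (points[:, 2] >= z_min) & (points[:, 2] <= z_max))

points = np.array([[0.0, 0.0, 10.0],    # inside
                   [50.0, 0.0, 10.0],   # x out of range
                   [0.0, 0.0, 80.0]])   # z out of range
mask = extents_filter(points, area_extents)
filtered = points[mask]
```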
@walzimmer The ground plane can affect how the 3D box is projected onto the image. Can it be ignored when visualizing 3D boxes on the image?
As you can see, I hard-coded the ground plane and the box is a little above the ground.
@walzimmer This code:

```python
cloud = PyntCloud.from_file(path_in + file)
is_floor = cloud.add_scalar_field("plane_fit",
                                  n_inliers_to_stop=len(cloud.points) / 30,
                                  max_dist=0.001)
cloud.points = cloud.points[cloud.points["is_plane"] > 0]
three_points = cloud.get_sample("points_random", n=3, as_PyntCloud=False)
three_points_np = []
for i in range(len(three_points)):
    three_points_np.append(np.array([three_points["x"][i],
                                     three_points["y"][i],
                                     three_points["z"][i] + 1.293]))
vector_one = three_points_np[1] - three_points_np[0]
vector_two = three_points_np[2] - three_points_np[0]
normal = np.cross(vector_one, vector_two)
normal_normalized = normal / np.linalg.norm(normal)
```

does not work if you read a .bin point cloud file.
Is it possible to remove the ground before AVOD and feed the points without ground into the AVOD model? Is there a flag to disable the use of the ground plane in AVOD?
@walzimmer This code (the PyntCloud plane-fitting snippet quoted above) does not work if you read a .bin point cloud file.
I tested this code in Python 3 and it works. I now want to implement it in Python 2 for ROS, but the pyntcloud library doesn't support Python 2. What do you suggest?
@jinfagang Maybe you should modify bin.py; KITTI stores point clouds as x, y, z, intensity, which should be reshaped to (-1, 4).
@mikon1995 Old issue; it's an np.float32 dtype problem. Otherwise the length read won't be divisible by 4, so it cannot be reshaped.
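The .bin reading discussed above comes down to two details: read with dtype=np.float32 and reshape to (-1, 4). A minimal self-contained sketch (a tiny synthetic scan is written first so it runs anywhere; replace the example filename "000000.bin" with a real KITTI velodyne file):

```python
import numpy as np

# Write a small synthetic "scan" so the example is self-contained:
# each point is float32 x, y, z, intensity.
points_out = np.array([[1.0, 2.0, 3.0, 0.5],
                       [4.0, 5.0, 6.0, 0.2]], dtype=np.float32)
points_out.tofile("000000.bin")

# Read it back: without dtype=np.float32 the buffer length would not
# be divisible by 4 values per point, and the reshape would fail.
scan = np.fromfile("000000.bin", dtype=np.float32).reshape(-1, 4)
xyz = scan[:, :3]  # drop intensity before plane fitting (e.g. with PyntCloud)
```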
@jinfagang Have you successfully generated ground planes now? I tried the code above, but I don't think it is accurate enough... the test accuracy on KITTI is bad.
@mikon1995 Ground estimation can probably be done from the point cloud with PCL (the same process as filtering out ground points gives you a plane), which is much closer to the real road surface. Generating planes is time-consuming, though, and hard to take into production.
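The PCL-style ground estimation mentioned above is essentially a RANSAC plane fit. A generic sketch in plain NumPy (not PCL's actual API; parameter values are illustrative):

```python
import numpy as np

def ransac_plane(points, n_iters=100, dist_thresh=0.05, rng=None):
    """Fit a plane [a, b, c, d] to points (N, 3) by RANSAC."""
    rng = np.random.default_rng(0) if rng is None else rng
    best_inliers, best_plane = 0, None
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        v1, v2 = sample[1] - sample[0], sample[2] - sample[0]
        n = np.cross(v1, v2)
        norm = np.linalg.norm(n)
        if norm < 1e-8:
            continue  # degenerate (collinear) sample
        n = n / norm
        d = -np.dot(n, sample[0])
        dists = np.abs(points @ n + d)     # point-to-plane distances
        inliers = int((dists < dist_thresh).sum())
        if inliers > best_inliers:
            best_inliers, best_plane = inliers, np.append(n, d)
    return best_plane

# Synthetic test: points on the y = 1.65 plane plus a few outliers.
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(-10, 10, 200),
                          np.full(200, 1.65),
                          rng.uniform(0, 40, 200)])
outliers = rng.uniform(-5, 5, (10, 3))
plane = ransac_plane(np.vstack([ground, outliers]))
```

On real scans the ground dominates the lower part of the cloud, so this kind of fit tracks the actual road surface better than a fixed flat plane, at the cost of per-frame computation.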
@jinfagang
To start training, run the following:
```
python avod/experiments/run_training.py --pipeline_config=avod/configs/avod_cars_example.config
```
The following error occurred!! Do you know how to solve it? Thank you!
```
No checkpoints found
Traceback (most recent call last):
  File "/home/scw4750/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1327, in _do_call
    return fn(*args)
  File "/home/scw4750/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1310, in _run_fn
    self._extend_graph()
  File "/home/scw4750/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1358, in _extend_graph
    graph_def.SerializeToString(), status)
  File "/home/scw4750/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 516, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'MaxBytesInUse' with these attrs. Registered devices: [CPU], Registered kernels: device='GPU'
	 [[Node: MaxBytesInUse = MaxBytesInUse[]()]]

InvalidArgumentError (see above for traceback): No OpKernel was registered to support Op 'MaxBytesInUse' with these attrs. Registered devices: [CPU], Registered kernels: device='GPU'
	 [[Node: MaxBytesInUse = MaxBytesInUse[]()]]
```
Thank you. I have a question about the generation of planes from IMU data described above: should it be numpy.dot() or numpy.multiply()? The rotation matrix R is 3x3 and the vector n is 1x3, so if we use numpy.dot in n_new = R n, don't we need to transpose the vector n? Can you explain why we can use n_new = R n to calculate the plane, please?
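To illustrate the dot-vs-multiply question: with a 1-D n of shape (3,), NumPy's np.dot (or @) treats it as a column vector and returns shape (3,), so no transpose is needed; np.multiply is element-wise and gives something entirely different. A small demonstration with an example 90-degree rotation about the z axis:

```python
import numpy as np

R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])  # example rotation: 90 degrees about z
n = np.array([0.0, -1.0, 0.0])    # 1-D vector, shape (3,)

n_dot = R @ n              # matrix-vector product, shape (3,)
n_mul = np.multiply(R, n)  # element-wise broadcast, shape (3, 3) -- not a vector
```

Because n is 1-D, `R @ n` and `np.dot(R, n)` are the standard matrix-vector product; a transpose would only be needed if n were stored as a 1x3 2-D array.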
Regarding the lines above that compute normal_normalized: does normal_normalized represent the ground plane? I found that it is an array with only 3 elements.
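The normalized normal gives only [a, b, c] of the plane a*x + b*y + c*z + d = 0; the fourth coefficient d comes from a known point on the plane, d = -n . p. A hedged sketch, assuming camera coordinates with y pointing down and the KITTI camera height of 1.65 m (so the ground point directly below the camera is [0, 1.65, 0]):

```python
import numpy as np

normal_normalized = np.array([0.0, -1.0, 0.0])  # [a, b, c] from the plane fit
point_on_plane = np.array([0.0, 1.65, 0.0])     # ground point below the camera

d = -np.dot(normal_normalized, point_on_plane)  # fourth plane coefficient
plane = np.append(normal_normalized, d)         # full plane [a, b, c, d]
```

For flat ground this reproduces the [0 -1 0 1.65] plane mentioned earlier in the thread.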
Hello! What methods are used to generate planes.txt? You said to use the IMU (yaw, pitch, roll); others said to use the LiDAR point cloud. Which method is more accurate?
Hi. I would like to use this with ROS too. Is it possible to use the code with the latest CUDA 10 and cuDNN 7.1? Also, can this code give me the individual orientation of the detected objects in real time?
Planes: #19 · ROS: #83, #171
Hi, is there code to generate planes for the test files?