makengi closed this issue 3 years ago
I'm curious about this, too.
Hi @makengi ,
My first hunch is that the detection results and the point clouds are in different coordinate systems/units. Is that the case?
The detected objects and their tracks are far apart from the point cloud clusters. The markers appear to be placed on empty cells, which should ideally never happen, since detections (even false detections) will land on valid point cloud data and not in empty space.
Thank you for the reply @praveen-palanisamy
Oh, I see.
Could you tell me how to change the coordinate system/units to resolve this problem?
Do I have to transform the coordinates after reading the /filtered_cloud topic?
You may just have to define a global fixed frame of reference and use SI units for your point clouds. From the warnings (yellow text with exclamation icons) in your RViz window (zoomed view below), it looks like you don't have a transform tree root defined for a global reference frame. Please fix that and try again.
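To illustrate the idea (this is not the package's code; in a real ROS setup you would publish the transform via tf rather than apply it by hand, and the yaw/translation values below are made-up examples): getting everything into one global fixed frame with SI units amounts to a unit conversion plus a rigid transform.

```python
import math

def to_fixed_frame(points_mm, yaw_rad, translation_m):
    """Transform sensor-frame points (here assumed to be in millimetres)
    into a fixed global frame in metres: scale, rotate about Z, translate.

    yaw_rad / translation_m are hypothetical example parameters standing in
    for the sensor's pose in the fixed frame."""
    cos_y, sin_y = math.cos(yaw_rad), math.sin(yaw_rad)
    out = []
    for x_mm, y_mm, z_mm in points_mm:
        # millimetres -> metres (SI units)
        x, y, z = x_mm / 1000.0, y_mm / 1000.0, z_mm / 1000.0
        # rotate about the Z axis by the sensor's yaw
        xr = cos_y * x - sin_y * y
        yr = sin_y * x + cos_y * y
        # translate by the sensor's position in the fixed frame
        out.append((xr + translation_m[0],
                    yr + translation_m[1],
                    z + translation_m[2]))
    return out
```

Once all inputs (detections and clouds) pass through the same frame/units convention, the markers and clusters should line up in RViz.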
Thank you for the reply again @praveen-palanisamy. I have one more question: I want to detect and classify objects so I can differentiate between cars, people, and bicycles. Is there a method that can be applied in the code? I saw the k-means part of the code, but I don't know how to apply it. Any advice would be appreciated.
This package helps to detect, cluster, and track objects. Differentiating between object instances or identifying attributes of specific object classes (cars, pedestrians, bicycles, etc.) requires object-specific feature/attribute detection, which is outside the scope of this package. You could use the output from this package and train a classifier to differentiate between cars, people, bicycles, etc. Several open-source labeled datasets are available that you can use to train your car/pedestrian/bicycle point cloud classifier and object detector. Links to some of the widely used datasets are provided below:
You could use any of the above datasets and train a model using any of the 3D point cloud classification methods (deep neural network architectures) from the Papers with Code repository, such as PointNet++, to get started.
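On the k-means question: the package's clustering is implemented in C++, but the algorithm itself is simple. A minimal pure-Python sketch (illustrative only; real point clouds are 3D and the naive first-k initialization below is a simplification):

```python
def kmeans(points, k, iters=20):
    """Minimal k-means: repeatedly assign each point to its nearest
    centroid, then move each centroid to the mean of its assigned points.

    points: list of coordinate tuples; k: number of clusters."""
    centroids = list(points[:k])  # naive init: first k points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # index of the nearest centroid (squared Euclidean distance)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        for i, cl in enumerate(clusters):
            if cl:  # leave empty clusters' centroids unchanged
                centroids[i] = tuple(sum(coord) / len(cl)
                                     for coord in zip(*cl))
    return centroids, clusters
```

Note that k-means only groups nearby points into clusters; it does not tell you *what* each cluster is. Turning clusters into class labels (car/person/bicycle) is exactly the supervised classification step described above, which needs a labeled dataset.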
Closing this since the questions were answered. Please re-open if you need more assistance.
I am trying to recognize an object using the VLP-16 Velodyne 3D LiDAR sensor. After replacing the /filtered_cloud topic with /velodyne_points, I successfully read the topic and visualized it in RViz. However, it looks like it doesn't detect and track the object properly. After reading the topic, do I need a point cloud filtering process other than RANSAC or KDTree? If so, could you please give me some advice on the point cloud filtering process?
The first GIF below is the RViz screen of the point cloud and marker array; the second GIF is the RViz screen of the cluster_points and marker array.