charlesq34 / pointnet

PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation

how to visualize the segmentation results #167

Closed heng94 closed 5 years ago

heng94 commented 5 years ago

Thanks for sharing this work. I am new to semantic segmentation; could you tell me how to visualize the results and the raw point clouds? Thank you very much!

shubhamwagh commented 5 years ago

Segmentation results are dumped in the log6/dump/ folder, which contains ground-truth and predicted ".obj" files. You can open the .obj files with free 3D mesh-processing software such as MeshLab.
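
If a mesh viewer is not handy, the dumped points can also be loaded with open3d directly. A minimal sketch, assuming each dumped .obj stores one colored point per "v x y z r g b" line; the helper function and the file name below are only hypothetical examples:

import numpy as np
import open3d

def load_obj_points(path):
    # Collect the "v x y z r g b" vertex lines; everything else is ignored.
    points = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 7 and parts[0] == 'v':
                points.append([float(v) for v in parts[1:]])
    return np.asarray(points)

pts = load_obj_points('log6/dump/Area_6_office_1_pred.obj')  # example path only
pcd = open3d.geometry.PointCloud()
pcd.points = open3d.utility.Vector3dVector(pts[:, 0:3])          # XYZ points
pcd.colors = open3d.utility.Vector3dVector(pts[:, 3:6] / 255.0)  # RGB scaled to [0, 1]
open3d.visualization.draw_geometries([pcd])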

heng94 commented 5 years ago

Sorry, I didn't say it clearly. I meant: how do I visualize the source point clouds? I am new to point clouds. Thanks for your help!

shubhamwagh commented 5 years ago

  • First download the dataset from the link mentioned in the ReadMe file.
  • Run python collect_indoor3d_data.py from the sem_seg folder.
  • It will create a new folder stanford_indoor3d and dump all source point clouds in .npy format.
  • To visualize them, run the following script (open3d needs to be installed):

from open3d import *
import numpy as np

# give path to ".npy" file
pc = np.load('/home/qbot/pointnet/data/stanford_indoor3d/Area_1_conferenceRoom_1.npy')

pcd = PointCloud()
pcd.points = Vector3dVector(pc[:, 0:3])  # XYZ points
pcd.colors = Vector3dVector(pc[:, 3:6] / 255.0)  # open3d requires colors (RGB) to be in range [0, 1]
draw_geometries([pcd])

You should see something like this for **Area_1_conferenceRoom_1.npy**:
![Screenshot from 2019-05-08 10-16-13](https://user-images.githubusercontent.com/22571244/57364106-640c7e80-717a-11e9-9fe7-dd72be7cf8e5.png)

I hope this helps.

heng94 commented 5 years ago

Oh, thanks, that's great! Thank you very much!

thirdlastletter commented 5 years ago

Segmentation results are dumped in the log6/dump/ folder, which contains ground-truth and predicted ".obj" files. You can open the .obj files with free 3D mesh-processing software such as MeshLab.

Hi, after running train.py with my own data there is no dump folder. Have you encountered this problem before? Thank you

shubhamwagh commented 5 years ago

Hi @thirdlastletter, the dump folder is created during batch inference. If you read the ReadMe file for sem_seg carefully:

1) Training requires the user to give a log-directory input:
python train.py --log_dir log6 --test_area 6

2) The dump folder is created at testing/inference time, since the user has to give a dump directory:
python batch_inference.py --model_path log6/model.ckpt --dump_dir log6/dump --output_filelist log6/output_filelist.txt --room_data_filelist meta/area6_data_label.txt --visu

thirdlastletter commented 5 years ago

@shubhamwagh Hi, thank you! One question: I have a 3-fold cross-validation set. I can use train.py with both my train and test data. What do I need batch_inference for then? Just for the visualization/prediction? So I need to use both train.py and batch_inference.py on one set of train/test files, and repeat this for the other two versions? Thank you.

shubhamwagh commented 5 years ago

Hi @thirdlastletter So, considering you have a 3-fold cross-validation set (e.g. dataset_0, dataset_1, dataset_2), you will have to train your model 3 times, i.e.:

1) train on dataset_0 and dataset_1, validate on dataset_2:
python train.py --log_dir log --test_area dataset_2
2) train on dataset_0 and dataset_2, validate on dataset_1:
python train.py --log_dir log --test_area dataset_1
3) train on dataset_1 and dataset_2, validate on dataset_0:
python train.py --log_dir log --test_area dataset_0

Once your model is fully trained, you can use a different test set (with the same distribution as the train and validation sets) for inference and for getting predictions from your learned model.

python batch_inference.py --model_path log/model.ckpt --dump_dir log/dump --output_filelist log/output_filelist.txt --room_data_filelist meta/area6_data_label.txt --visu

Yes, this is mainly to get immediate predictions from the learned model; that script dumps the segmented point-cloud files so that you can visualize them.

If you don't have a test set, that is still fine; you can randomly sample from your existing 3-fold cross-validation set and test on those samples.
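
For reference, a minimal sketch of that 3-fold loop in Python; the fold names and per-fold log directories are hypothetical placeholders for your actual --test_area values:

import subprocess

# Hypothetical fold names; each run trains on the other two folds,
# validates on `fold`, and writes to its own log directory.
for fold in ['dataset_0', 'dataset_1', 'dataset_2']:
    subprocess.run(['python', 'train.py',
                    '--log_dir', 'log_' + fold,
                    '--test_area', fold],
                   check=True)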

thirdlastletter commented 5 years ago

@shubhamwagh Thank you so much! I understand now :)

heng94 commented 4 years ago

@shubhamwagh Oh, excuse me, I am coming back again. After I trained and evaluated on the ModelNet40 dataset, the dump folder just contains a _predlabel.txt file. Could you tell me what to do if I want to visualize the segmentation results? Thanks a lot!

shubhamwagh commented 4 years ago

Hi @HanochZzhou It looks like you are checking the wrong folder. Segmentation results are dumped in the /pointnet/sem_seg/log6/dump folder, not the pointnet/dump folder.

heng94 commented 4 years ago

Hi, @shubhamwagh Thanks for your reply. In fact, I understand the whole semantic-segmentation process. But for classification and part segmentation with the ModelNet40 and ShapeNet datasets, the test results are just a __pred_lable.txt__ file in the dump folder. Do you have any idea how to visualize the classification and part-segmentation results?

shubhamwagh commented 4 years ago

Hi @HanochZzhou Please open a new issue for that, just to keep individual thread topics separated for other people. Thanks.

heng94 commented 4 years ago

OK, Thanks!

kiranintellify commented 4 years ago

Hi @shubhamwagh How do I test only one file, e.g. an office, in semantic segmentation? Is it necessary to give the processed .npy file, or can we test with the office.txt file?
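
A possible route, sketched under assumptions: if office.txt stores one "x y z r g b" line per point (as the raw Stanford rooms do), and batch_inference.py expects the 7-column XYZRGBL .npy layout that collect_indoor3d_data.py produces, the file can be converted directly; the zero label column below is only a placeholder:

import numpy as np

# Sketch under assumptions: office.txt holds "x y z r g b" per line,
# and the .npy layout is 7 columns (XYZ, RGB, label).
pts = np.loadtxt('office.txt')            # shape (N, 6): XYZ + RGB
labels = np.zeros((pts.shape[0], 1))      # placeholder labels when no annotation exists
np.save('office.npy', np.hstack([pts, labels]))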

renmaqilong commented 4 years ago

Segmentation results are dumped in the log6/dump/ folder, which contains ground-truth and predicted ".obj" files. You can open the .obj files with free 3D mesh-processing software such as MeshLab.

When I open the .obj files in MeshLab or CloudCompare, the color information can't be seen.

limt15 commented 4 years ago

Hi @shubhamwagh I have some trouble visualizing the part-segmentation results. Could you please help me solve this problem? Many thanks!

limt15 commented 4 years ago
  • First download the dataset from the link mentioned in the ReadMe file.
  • Run python collect_indoor3d_data.py from the sem_seg folder.
  • It will create a new folder stanford_indoor3d and dump all source point clouds in .npy format.
  • To visualize them, run the following script (open3d needs to be installed):

from open3d import *
import numpy as np

# give path to ".npy" file
pc = np.load('/home/qbot/pointnet/data/stanford_indoor3d/Area_1_conferenceRoom_1.npy')

pcd = PointCloud()
pcd.points = Vector3dVector(pc[:, 0:3])  # XYZ points
pcd.colors = Vector3dVector(pc[:, 3:6] / 255.0)  # open3d requires colors (RGB) to be in range [0, 1]
draw_geometries([pcd])

You should see something like this for Area_1_conferenceRoom_1.npy (screenshot from 2019-05-08 10-16-13).

I hope this helps.

I used this script, but there is an error: NameError: name 'PointCloud' is not defined

What is pcd = PointCloud() referring to? Could you help me solve this problem?

shubhamwagh commented 4 years ago

@limt15 , open3d recently updated their API. Kindly find the updated script below.

import open3d
import numpy as np

#give path to ".npy" file
pc = np.load('/home/qbot/pointnet/data/stanford_indoor3d/Area_1_conferenceRoom_1.npy') 

pcd = open3d.geometry.PointCloud()
pcd.points = open3d.utility.Vector3dVector(pc[:,0:3]) # XYZ points
pcd.colors = open3d.utility.Vector3dVector(pc[:,3:6]/ 255.0)  #open3d requires colors (RGB) to be in range[0,1]
open3d.visualization.draw_geometries([pcd])
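
If in doubt, a quick version check (plain Python, not from the repo) tells you which API your environment has:

import open3d
print(open3d.__version__)  # in newer releases the class lives at open3d.geometry.PointCloud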

limt15 commented 4 years ago

@limt15 , recently open3d updated their api version. Kindly find the updated script.

import open3d
import numpy as np

#give path to ".npy" file
pc = np.load('/home/qbot/pointnet/data/stanford_indoor3d/Area_1_conferenceRoom_1.npy') 

pcd = open3d.geometry.PointCloud()
pcd.points = open3d.utility.Vector3dVector(pc[:,0:3]) # XYZ points
pcd.colors = open3d.utility.Vector3dVector(pc[:,3:6]/ 255.0)  #open3d requires colors (RGB) to be in range[0,1]
open3d.visualization.draw_geometries([pcd])

Thank you very much! I have solved this problem, and I have also visualized the .txt file successfully.

By the way, I look forward to further discussion with you in the future~~~

ZJU-PLP commented 4 years ago

Segmentation results are dumped in the log6/dump/ folder, which contains ground-truth and predicted ".obj" files. You can open the .obj files with free 3D mesh-processing software such as MeshLab.

When I open the .obj files in MeshLab or CloudCompare, the color information can't be seen.

@renmaqilong Hi, did you solve the color problem when you visualized the .obj results in MeshLab? Would you mind sharing the method? Thanks a lot!

Yujun1212 commented 4 years ago

@limt15 Hi! I am happy to see that you have solved this visualization question. I have the same question. Could you share the details of your method with me? Thank you!

IfrahIdrees commented 3 years ago

Hi @limt15 I am happy that you solved the visualization of the segmentation results. Can you please share details of how you visualized the scene-segmentation results?