aminebdj / OpenYOLO3D

Our OpenYOLO3D model achieves state-of-the-art performance in Open Vocabulary 3D Instance Segmentation on the ScanNet200 and Replica datasets, with up to a ∼16× speedup compared to the best existing method in the literature.

Code for visualizations #5

Closed shjung13 closed 2 months ago

shjung13 commented 3 months ago

Hi authors,

Thank you for releasing the code for this great work!

I was able to successfully reproduce the performance in the paper.

However, I still wonder if there is any way I can visualize the results.

Do you have any plans to release code for visualizations?

Or could you share a sample script that I can build on for the visualizations?

Please feel free to reach out via email if you prefer that over a GitHub issue for sharing the code!

Email: shjung13 [at] cs.washington.edu

Thank you!

Best, Sanghun

aminebdj commented 3 months ago

Dear @shjung13,

Thank you for your interest in our work.

I updated the repository, so please pull it again. If you already downloaded the Replica dataset we provided, you can use one of the sample point clouds there; I added a sample script. For visualization, you can use pyviz3d, which you can install with `python -m pip install pyviz3d`. In my case, I used Blender directly, as it is more flexible; you can simply load the `output.ply` file there.

```python
from utils import OpenYolo3D
import os
import pyviz3d.visualizer as viz
from models.Mask3D.mask3d import load_mesh_or_pc
import numpy as np

# Load the pretrained model, run open-vocabulary prediction on a sample
# Replica scene, and export the result as a colored point cloud.
openyolo3d = OpenYolo3D(f"{os.getcwd()}/pretrained/config.yaml")
prediction = openyolo3d.predict(path_2_scene_data=f"{os.getcwd()}/data/replica/office0", depth_scale=6553.5, text=["chair"])
openyolo3d.save_output_as_ply(f"{os.getcwd()}/output.ply")

# Reload the exported point cloud and visualize it with pyviz3d.
pc = load_mesh_or_pc(f"{os.getcwd()}/output.ply", datatype="point cloud")
point_size = 35.0
v = viz.Visualizer(position=[5, 5, 1])
v.add_points('Prediction', np.asarray(pc.points), np.asarray(pc.colors) * 255.0, point_size=point_size, visible=True)

# Optionally render through Blender (adjust executable_path for your system).
blender_args = {'output_prefix': './',
                'executable_path': '/Applications/Blender.app/Contents/MacOS/Blender'}
v.save('example_meshes', blender_args=blender_args)
```

You can find the same script in `./single_scene_inference.py`.

Please let me know if you face any issues.
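If you just want to sanity-check the exported `output.ply` before opening it in pyviz3d or Blender, parsing the PLY header is enough to confirm how many points were written. The `ply_vertex_count` helper below is a hypothetical stdlib-only sketch, not part of the repository:

```python
def ply_vertex_count(path):
    """Read a PLY header and return the declared vertex count."""
    with open(path, "rb") as f:
        for raw in f:
            line = raw.decode("ascii", errors="ignore").strip()
            if line.startswith("element vertex"):
                return int(line.split()[-1])
            if line == "end_header":  # no vertex element declared
                break
    return 0

# Demo with a tiny hand-written ASCII PLY file:
demo = """ply
format ascii 1.0
element vertex 2
property float x
property float y
property float z
end_header
0 0 0
1 1 1
"""
with open("demo.ply", "w") as f:
    f.write(demo)

print(ply_vertex_count("demo.ply"))  # → 2
```

The same call on the real `output.ply` tells you immediately whether the export produced a non-empty point cloud.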

shjung13 commented 2 months ago

Thank you so much for the quick reply! I can see them correctly :)