drprojects / superpoint_transformer

Official PyTorch implementation of Superpoint Transformer introduced in [ICCV'23] "Efficient 3D Semantic Segmentation with Superpoint Transformer" and SuperCluster introduced in [3DV'24 Oral] "Scalable 3D Panoptic Segmentation As Superpoint Graph Clustering"
MIT License
560 stars · 72 forks

How to save to a visualizable point cloud? #136

Closed IsraelAbebe closed 1 month ago

IsraelAbebe commented 2 months ago

I was able to get the NAG output from the notebooks and I was wondering how I can save it as point cloud data to visualize the predictions. I tried to save the segmentation points to a NumPy .npz file, but because of the sampling I couldn't match the shapes.

Thank you in advance for the answer, and awesome project.

zeejja commented 2 months ago

> I was able to get the NAG output from the notebooks and I was wondering how I can save it as point cloud data to visualize the predictions. I tried to save the segmentation points to a NumPy .npz file, but because of the sampling I couldn't match the shapes.
>
> Thank you in advance for the answer, and awesome project.

Hello, I am also stuck at the visualization phase. I don't know how to visualize my predictions on the test data. After running the command `python src/eval.py experiment=/ ckpt_path=/path/to/your/checkpoint.ckpt`, an eval folder is created and contains the following files: [screenshot]

I don't know where the prediction results are saved or how to visualize them. If you could guide us in this regard, it would be helpful.

MenglinQiu commented 2 months ago

> I was able to get the NAG output from the notebooks and I was wondering how I can save it as point cloud data to visualize the predictions. I tried to save the segmentation points to a NumPy .npz file, but because of the sampling I couldn't match the shapes.
>
> Thank you in advance for the answer, and awesome project.

It is possible to visualize the sampled point cloud directly. The superpoint index of each point is stored in `nag[0].super_index`, and the point coordinates are stored in `nag[0].pos`. The prediction result is actually the label of each node (superpoint) in `nag[1]` (or higher levels). Alternatively, re-read the NAG file to get the full point cloud.
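For instance, a quick sanity check of these attributes on a NAG from the demo notebook (a minimal sketch, assuming `nag` is already loaded in memory):

```python
# Level 0 holds the (voxelized) points, levels 1+ hold the superpoints
print(nag[0].pos.shape)          # [num_points, 3] point coordinates
print(nag[0].super_index.shape)  # [num_points] level-1 superpoint index of each point
print(nag[1].num_points)         # number of level-1 nodes (superpoints)
```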

IsraelAbebe commented 2 months ago

@MenglinQiu the pipeline I was working on uses an external tool for visualization, so I wanted to save the predictions as a PLY file with the segmentation labels attached as properties.

Or even just save the segmentation output.

MenglinQiu commented 2 months ago

> @MenglinQiu the pipeline I was working on uses an external tool for visualization, so I wanted to save the predictions as a PLY file with the segmentation labels attached as properties.
>
> Or even just save the segmentation output.

I think I understand what you mean. We can get the node labels of `nag[1]` from the prediction result `pred`, so the labels of the points in `nag[0]` can be obtained through `nag[0].super_index` and `pred`. The values in `nag[0].super_index` are the indices of the nodes in `nag[1]`, and its length equals the number of sampled points. The order of the points in `nag[0].pos` is consistent with the order of `nag[0].super_index`, so the coordinates of the sampled points can be read from `nag[0].pos`. With the coordinates and labels of the points, just save them as a PLY file.

For example: `pred = [0, 1, 2, 3, 2]` and `nag[0].super_index = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 4, 2]`. This means that after sampling, 12 points are retained, belonging to 5 superpoints. The labels of these 12 points are then `label = pred[super_index] = [0, 0, 1, 1, 2, 2, 3, 3, 2, 2, 2, 2]`.
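A minimal NumPy sketch of this expansion, using the toy arrays above (not real data):

```python
import numpy as np

# Toy example: 5 superpoint labels and the superpoint index of 12 sampled points
pred = np.array([0, 1, 2, 3, 2])
super_index = np.array([0, 0, 1, 1, 2, 2, 3, 3, 4, 4, 4, 2])

# Fancy indexing broadcasts each superpoint's label to all of its points
labels = pred[super_index]
print(labels)  # [0 0 1 1 2 2 3 3 2 2 2 2]
```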

IsraelAbebe commented 2 months ago

Thank you, I will try that.

Final question: is there a way to use the full point cloud, and not the sampled one, in the pipeline, so I can predict on the full points?

I saved those outputs and ply.points has 50000 points but my prediction has 5000. (Not exact numbers, but I think you get my point.)

MenglinQiu commented 2 months ago

> Thank you, I will try that.
>
> Final question: is there a way to use the full point cloud, and not the sampled one, in the pipeline, so I can predict on the full points?
>
> I saved those outputs and ply.points has 50000 points but my prediction has 5000. (Not exact numbers, but I think you get my point.)

I think the labels you saved do not correspond to the actual points. SPT's output is the labels of the superpoints. Each superpoint contains multiple points, so the labels need to be expanded to obtain per-point labels.

MenglinQiu commented 2 months ago

> Thank you, I will try that.
>
> Final question: is there a way to use the full point cloud, and not the sampled one, in the pipeline, so I can predict on the full points?
>
> I saved those outputs and ply.points has 50000 points but my prediction has 5000. (Not exact numbers, but I think you get my point.)

I tried it and it was correct. The `pred` is obtained from the network during eval. The specific code is as follows:

```python
def step_single_run_inference(self, nag):
    """Single-run inference"""
    y_hist = self.step_get_y_hist(nag)
    logits = self.forward(nag)
    if self.multi_stage_loss:
        preds = torch.argmax(logits[0], dim=1)
    else:
        preds = torch.argmax(logits, dim=1)

    # Then
    import numpy as np
    lab = preds.cpu().numpy()
    pos = nag[0].pos[:].cpu().numpy()
    super_index = nag[0].super_index.cpu().numpy()
    labels = lab[super_index]
    data = np.concatenate((pos, labels.reshape(-1, 1)), axis=1)
    np.savetxt("xyz.txt", data)
```

You can add it where you need it. [screenshot]
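If you specifically want a PLY with the label attached as a per-point scalar field (e.g. for CloudCompare or another external viewer), a minimal sketch using the `plyfile` package (an extra dependency, not part of this repo) could look like this:

```python
import numpy as np
from plyfile import PlyData, PlyElement

def save_labeled_ply(path, pos, labels):
    """Write an N x 3 position array and an N label array to a PLY file
    with a 'label' scalar field readable by CloudCompare."""
    vertex = np.empty(
        len(pos),
        dtype=[('x', 'f4'), ('y', 'f4'), ('z', 'f4'), ('label', 'i4')])
    vertex['x'], vertex['y'], vertex['z'] = pos[:, 0], pos[:, 1], pos[:, 2]
    vertex['label'] = labels
    PlyData([PlyElement.describe(vertex, 'vertex')]).write(path)

# e.g. save_labeled_ply("pred.ply", pos, labels) with the arrays from the snippet above
```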

MenglinQiu commented 2 months ago

If you want to get the complete point cloud, you only need to save the `pos` and `super_index` of the original `nag[0]` before the sampling, or re-read the h5 file of the input scene.
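A minimal sketch for inspecting what a preprocessed h5 file contains, using plain h5py (the group layout depends on the dataset, so treat the key names as something to discover rather than as given):

```python
import h5py

# List every group/dataset in a preprocessed NAG file to locate the
# level-0 positions and super_index before exporting them.
with h5py.File("path/to/processed/scene.h5", "r") as f:
    f.visit(print)
```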

ImaneTopo commented 2 months ago

> I was able to get the NAG output from the notebooks and I was wondering how I can save it as point cloud data to visualize the predictions. I tried to save the segmentation points to a NumPy .npz file, but because of the sampling I couldn't match the shapes.
>
> Thank you in advance for the answer, and awesome project.

Hi, I want to ask you how you got the NAG output, because in my case I ran the training and testing, but I don't know how to visualize the predictions.

IsraelAbebe commented 2 months ago

> ```python
> import numpy as np
> lab = preds.cpu().numpy()
> pos = nag[0].pos[:].cpu().numpy()
> super_index = nag[0].super_index.cpu().numpy()
> labels = lab[super_index]
> data = np.concatenate((pos, labels.reshape(-1, 1)), axis=1)
> np.savetxt("xyz.txt", data)
> ```

I will try this and update you.

Thank you so much for the help.

MenglinQiu commented 2 months ago

> I was able to get the NAG output from the notebooks and I was wondering how I can save it as point cloud data to visualize the predictions. I tried to save the segmentation points to a NumPy .npz file, but because of the sampling I couldn't match the shapes. Thank you in advance for the answer, and awesome project.
>
> Hi, I want to ask you how you got the NAG output, because in my case I ran the training and testing, but I don't know how to visualize the predictions.

The source code does not seem to provide a way to save the prediction result `pred`; you need to modify it yourself.

ImaneTopo commented 2 months ago

> The source code does not seem to provide a way to save the prediction result `pred`; you need to modify it yourself.

Where exactly should I modify it?

MenglinQiu commented 2 months ago

> Where exactly should I modify it?

I noticed that the author provides the code in the notebooks (Figure 1). You need to run it separately and add a function to save the data. Or modify the source code if you want to save the test results during the testing phase after training; I found a possible location with the data you want (Figure 2). [screenshots]

ImaneTopo commented 2 months ago

> I noticed that the author provides the code in the notebooks (Figure 1). You need to run it separately and add a function to save the data. Or modify the source code; I found a possible location with the data you want (Figure 2). [screenshots]

In the GitHub repo there are no such files: demo_dales.ipynb and demo_s3dis.ipynb in notebooks. Could you share them with me? [screenshot]

MenglinQiu commented 2 months ago

> In the GitHub repo there are no such files: demo_dales.ipynb and demo_s3dis.ipynb in notebooks. Could you share them with me? [screenshot]

In the source code you downloaded, it is named demo.ipynb. The author updated it.

ImaneTopo commented 2 months ago

> In the GitHub repo there are no such files: demo_dales.ipynb and demo_s3dis.ipynb in notebooks. Could you share them with me? [screenshot]
>
> In the source code you downloaded, it is named demo.ipynb. The author updated it.

Okay, and the segmentation.py, do I have to create it myself?

MenglinQiu commented 2 months ago

> Okay, and the segmentation.py, do I have to create it myself?

I think you just need to add the code for saving the results there (in the test_step function). But considering that you have already completed the training, why not get the test set results through demo.ipynb? [screenshot]
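For reference, a rough sketch of the kind of save hook meant here, written as a standalone helper; the `test_step(self, batch, batch_idx)` signature and the `multi_stage_loss` / `forward` attributes are assumptions based on the snippets earlier in this thread, so check the actual model code for the exact names:

```python
import numpy as np
import torch

def save_level0_predictions(nag, logits, multi_stage, out_path):
    """Expand level-1 superpoint logits to level-0 point labels and dump
    them next to the point coordinates as an (x, y, z, label) text file."""
    if multi_stage:          # multi-stage models return a list of logits
        logits = logits[0]   # level-1 predictions come first
    preds = torch.argmax(logits, dim=1)
    pos = nag[0].pos.cpu().numpy()
    labels = preds[nag[0].super_index].cpu().numpy()
    np.savetxt(out_path, np.concatenate([pos, labels.reshape(-1, 1)], axis=1))

# Hypothetical call site inside test_step(self, batch, batch_idx):
# save_level0_predictions(batch, self.forward(batch), self.multi_stage_loss,
#                         f"test_pred_{batch_idx}.txt")
```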

ImaneTopo commented 2 months ago

> I think you just need to add the code for saving the results there (in the test_step function). [screenshot]

Could I add this part in the code of panoptic.py? I am working on panoptic segmentation.

MenglinQiu commented 2 months ago

> I think you just need to add the code for saving the results there (in the test_step function). [screenshot]
>
> Could I add this part in the code of panoptic.py? I am working on panoptic segmentation.

Of course, it's up to you. But considering that you have already completed the training, why not get the test set results through demo.ipynb?

ImaneTopo commented 2 months ago

> Of course, it's up to you. But considering that you have already completed the training, why not get the test set results through demo.ipynb?

Yes, that's what I'm doing. I run demo.ipynb, but here it prints my checkpoint as None:

[screenshot]

MenglinQiu commented 2 months ago

@ImaneTopo If your ckpt file exists, then this is a path problem. As you know, different IDEs have different requirements. If you cannot set the relative path correctly with `make_checkpoint_file_search_widget()`, try assigning the absolute path directly.
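Something along these lines in the notebook; the widget variable name follows the later comment in this thread, and the path is a placeholder:

```python
# Bypass the file-search widget and point directly at the checkpoint
ckpt_widget.value = "/absolute/path/to/logs/<run>/checkpoints/<name>.ckpt"
ckpt_path = ckpt_widget.value
```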

ImaneTopo commented 2 months ago

> @ImaneTopo If your ckpt file exists, then this is a path problem. As you know, different IDEs have different requirements. If you cannot set the relative path correctly with `make_checkpoint_file_search_widget()`, try assigning the absolute path directly.

I tried this but it doesn't work: [screenshot]

[screenshot]

MenglinQiu commented 2 months ago

> I tried this but it doesn't work: [screenshot]

Did you assign an absolute path to `ckpt_widget.value`?

[screenshot]

MenglinQiu commented 2 months ago

[screenshots]

It works.

ImaneTopo commented 2 months ago

> [screenshots]
>
> It works.

Yes, that's the point: it doesn't show the selected checkpoint even though I already have it in logs: [screenshot]

ImaneTopo commented 2 months ago

> [screenshots]
>
> It works.

Could the problem be here? [screenshot]

zeejja commented 2 months ago

> Hello, I am also stuck at the visualization phase. I don't know how to visualize my predictions on the test data. After running the command `python src/eval.py experiment=/ ckpt_path=/path/to/your/checkpoint.ckpt`, an eval folder is created and contains the following files: [screenshot]
>
> I don't know where the prediction results are saved or how to visualize them. If you could guide us in this regard, it would be helpful.

@MenglinQiu Could you please help me save my predictions so I can visualize them in CloudCompare? I am stuck on this part. If you could provide your email, I would be grateful.

MenglinQiu commented 2 months ago

> @MenglinQiu Could you please help me save my predictions so I can visualize them in CloudCompare? I am stuck on this part. If you could provide your email, I would be grateful.

Hi, you can contact me via swd5359313@gmail.com OR 645193498@qq.com

MenglinQiu commented 2 months ago

> Thank you, I will try that.
>
> Final question: is there a way to use the full point cloud, and not the sampled one, in the pipeline so I can predict on the full points?
>
> I saved those outputs and ply.points has 50000 points but my prediction has 5000. (Not exact numbers, but I think you get my point.)

@IsraelAbebe I overlooked one point in my answer above. During preprocessing, SPT performs voxel sampling through GridSampling3D, which can make the total number of points in the original point cloud inconsistent with the point cloud saved in the NAG file. However, I think this is easy to solve: when preprocessing the test set, just skip the voxel sampling.

IsraelAbebe commented 2 months ago

> @IsraelAbebe I overlooked one point in my answer above. During preprocessing, SPT performs voxel sampling through GridSampling3D, which can make the total number of points in the original point cloud inconsistent with the point cloud saved in the NAG file. However, I think this is easy to solve: when preprocessing the test set, just skip the voxel sampling.

Let me try that. Thank you!

IsraelAbebe commented 2 months ago

> @IsraelAbebe I overlooked one point in my answer above. During preprocessing, SPT performs voxel sampling through GridSampling3D, which can make the total number of points in the original point cloud inconsistent with the point cloud saved in the NAG file. However, I think this is easy to solve: when preprocessing the test set, just skip the voxel sampling.

Even doing this seems to give a reduced number of points:


```python
import hydra
import numpy as np
import torch
from src.utils import init_config
from src.datasets.kitti360 import read_kitti360_window
from src.transforms import NAGRemoveKeys
# NB: instantiate_datamodule_transforms (building the pre-/on-device transforms
# from the datamodule config) is assumed to be defined or imported elsewhere


def main(path, model_config, ckpt_path, output_path):
    print(f"file path: {path}, \n model config: {model_config}\n")
    data = read_kitti360_window(path)

    print(f"number of points: {data.num_points}\n keys: {data.keys}")

    cfg = init_config(overrides=[f"experiment={model_config}"])

    transforms_dict = instantiate_datamodule_transforms(cfg.datamodule)
    print(f"Data transforms: {transforms_dict}")

    # Apply pre-transforms
    nag = transforms_dict['pre_transform'](data)

    # Simulate the dataset's I/O behavior with only `point_load_keys`
    # and `segment_load_keys` loaded from disk
    nag = NAGRemoveKeys(level=0, keys=[k for k in nag[0].keys if k not in cfg.datamodule.point_load_keys])(nag)
    nag = NAGRemoveKeys(level='1+', keys=[k for k in nag[1].keys if k not in cfg.datamodule.segment_load_keys])(nag)

    # Move to device
    nag = nag.cuda()

    # Apply on-device transforms
    nag = transforms_dict['on_device_test_transform'](nag)

    # Instantiate the model and load pretrained weights
    model = hydra.utils.instantiate(cfg.model)
    model = model._load_from_checkpoint(ckpt_path)

    # Set the model in inference mode on the same device as the input
    model = model.eval().to(nag.device)

    # Inference, returns a task-specific output object carrying predictions
    with torch.no_grad():
        output = model(nag)

    pred = output.semantic_pred

    # Expand superpoint predictions to per-point labels and save as x, y, z, label
    lab = pred.cpu().numpy()
    pos = nag[0].pos[:].cpu().numpy()
    super_index = nag[0].super_index.cpu().numpy()
    labels = lab[super_index]
    data = np.concatenate((pos, labels.reshape(-1, 1)), axis=1)

    np.savetxt("xyz.txt", data)
```

Input: 500000 points, output: 330453 points.

MenglinQiu commented 2 months ago

> Even doing this seems to give a reduced number of points: (code above)
>
> Input: 500000 points, output: 330453 points.

@IsraelAbebe OK. Are you experimenting on the KITTI-360 dataset? I might know the cause of the problem. Let me verify it.

MenglinQiu commented 2 months ago

> Even doing this seems to give a reduced number of points: (code above)
>
> Input: 500000 points, output: 330453 points.
>
> @IsraelAbebe OK. Are you experimenting on the KITTI-360 dataset? I might know the cause of the problem. Let me verify it.

```python
import os
import sys

# Add the project's files to the python path
file_path = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))  # for .py script
# file_path = os.path.dirname(os.path.abspath(''))  # for .ipynb notebook
sys.path.append(file_path)

# Necessary for advanced config parsing with hydra and omegaconf
from omegaconf import OmegaConf
OmegaConf.register_new_resolver("eval", eval)

import hydra
from src.utils import init_config
import torch
from src.visualization import show
from src.datasets.kitti360 import CLASS_NAMES, CLASS_COLORS, read_kitti360_window
from src.datasets.kitti360 import KITTI360_NUM_CLASSES as NUM_CLASSES
from src.transforms import *

# Parse the configs using hydra
cfg = init_config(overrides=[
    "experiment=kitti360",
    "ckpt_path=./media/spt-2_kitti360.ckpt"
])

# Instantiate the datamodule
datamodule = hydra.utils.instantiate(cfg.datamodule)
print(f"Data transforms: {datamodule}")

path = "/media/shi/projects/superpoint_transformer/data/kitti360/raw/data_3d_semantics/2013_05_28_drive_0008_sync/static/0000002769_0000003002.ply"
data = read_kitti360_window(path)
print(f"number of points: {data.num_points}\n keys: {data.keys}")

# Apply pre-transforms
nag = datamodule.pre_transform(data)

# Simulate the dataset's I/O behavior with only `point_load_keys`
# and `segment_load_keys` loaded from disk
from src.transforms import NAGRemoveKeys
nag = NAGRemoveKeys(level=0, keys=[k for k in nag[0].keys if k not in cfg.datamodule.point_load_keys])(nag)
nag = NAGRemoveKeys(level='1+', keys=[k for k in nag[1].keys if k not in cfg.datamodule.segment_load_keys])(nag)

# Move to device
nag = nag.cuda()

# Apply on-device transforms
nag = datamodule.on_device_test_transform(nag)

# Instantiate the model
model = hydra.utils.instantiate(cfg.model)

# Load pretrained weights from a checkpoint file
model = model._load_from_checkpoint(cfg.ckpt_path)
model = model.eval().cuda()

# Inference
logits = model(nag)

# If the model outputs multi-stage predictions, we take the first one,
# corresponding to level-1 predictions
if model.multi_stage_loss:
    logits = logits[0]

# Compute the level-0 (pointwise) predictions based on the predictions
# on level-1 superpoints
l1_preds = torch.argmax(logits, dim=1).detach()
l0_preds = l1_preds[nag[0].super_index]

print(f"number of pred: {l0_preds.shape[0]}")

# Save predictions for visualization in the level-0 Data attributes
nag[0].pred = l0_preds
```

@IsraelAbebe I created an example based on your code. As I said, you read the point cloud directly from the original file and generate the NAG through the pre_transform process so that it can be fed into SPT. But there is a key problem here: pre_transform performs voxel sampling, and the voxel size is set in datamodule/kitti360.yaml. This makes the number of points in the NAG smaller than in the original data, and I think this is the key to your problem. If you want predicted semantic labels for the full-resolution point cloud, you should set voxel to 0 in datamodule/kitti360.yaml or comment out the GridSampling3D component. Since I don't know the details of the "instantiate_datamodule_transforms" function you defined, I made some modifications to implement your example. The process should be the same, right?
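For reference, a hedged sketch of the kind of override meant here; the config key is assumed to be `datamodule.voxel`, so check datamodule/kitti360.yaml for the actual name and default value:

```python
from src.utils import init_config

# Compose the config with voxel subsampling disabled, so the level-0
# points keep the full input resolution.
cfg = init_config(overrides=[
    "experiment=kitti360",
    "datamodule.voxel=0",
])
```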

IsraelAbebe commented 1 month ago


Thank you, it works!