KaiLong1 opened this issue 1 year ago
Hi, thanks for your interest.
I don’t have time to clean the code, but it is not hard to do following the paper. I only use some APIs of Det3D, NumPy, and Open3D (for denoising). You can try it; it should be easy to implement. Note that the mirror step is implemented in the dataloader, and the code for the pose transformation is also in the dataloader (preprocessing part: https://github.com/stevewongv/Sparse2Dense/blob/676ba8a1c51ef503da661ddc4294a6f10bc5e7f0/det3d/datasets/pipelines/preprocess.py#L103)
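For readers trying to reproduce this before the cleaned code lands: below is a minimal NumPy sketch of what a "mirror step" for an object's points could look like, assuming it reflects the cropped object points across the box's heading plane to exploit left/right symmetry. The function name and box parameterization are my own illustration, not taken from preprocess.py; check the linked file for the actual implementation.

```python
import numpy as np

def mirror_object_points(points, box):
    """Reflect one object's points across its heading plane.

    A guess at the 'mirror step' (symmetry-based densification);
    see preprocess.py in the repo for the real code.

    points: (N, 3) points already cropped to the object
    box: (cx, cy, cz, yaw) box center and heading angle
    """
    cx, cy, cz, yaw = box
    center = np.array([cx, cy, cz])
    # Rotate into the box frame so the heading axis is +x.
    c, s = np.cos(-yaw), np.sin(-yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    local = (points - center) @ R.T
    # Mirror across the x-z plane (the vehicle's left/right symmetry plane).
    mirrored = local * np.array([1.0, -1.0, 1.0])
    dense_local = np.concatenate([local, mirrored], axis=0)
    # Rotate/translate back into the sensor frame.
    return dense_local @ R + center
```

The denoising mentioned above could then be applied on the fused result, e.g. with Open3D's statistical outlier removal, before saving the dense GT objects.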
Hi, does this mean the well-organized code for dense object generation won't be released in the coming weeks? If so, could you perhaps provide some key scripts so we can complete the whole generation ourselves with less effort? This may save you the time of cleaning the code, while eliminating more potential confusion about this issue.
Apart from your Sparse2Dense method, I also tried dense object generation in a more direct way: I use WOD 5-sweep data as input and, for these 5 frames, fuse the 4 past frames into the current frame (using the ego-pose information to get the translation and rotation matrices for the pose transformation). My results with full WOD data training:
The reason I ask for your generation code is that I would like to run experiments on the full WOD dataset with Sparse2Dense, to compare how much each approach improves the original CenterPoint-Voxel model. Thanks in advance.
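For reference, the direct multi-sweep fusion described above can be sketched in a few lines of NumPy. This assumes each frame comes with a 4x4 ego-to-global pose matrix (as WOD provides); the function and argument names are illustrative, not from the Sparse2Dense code.

```python
import numpy as np

def fuse_sweeps(sweeps, poses, cur_idx=-1):
    """Fuse several LiDAR sweeps into the current frame using ego poses.

    sweeps: list of (N_i, 3) point arrays, one per frame
    poses:  list of (4, 4) ego-to-global transforms, one per frame
    cur_idx: index of the frame to fuse into (default: last)
    """
    T_cur_inv = np.linalg.inv(poses[cur_idx])
    fused = []
    for pts, T in zip(sweeps, poses):
        # Homogeneous coordinates: (N, 4).
        hom = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)
        # Past ego frame -> global -> current ego frame.
        fused.append((hom @ (T_cur_inv @ T).T)[:, :3])
    return np.concatenate(fused, axis=0)
```

Cropping the fused cloud with the current frame's GT boxes (after compensating for object motion, for dynamic objects) would then give the dense per-object point sets.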
Hello, I see that you have successfully run the Sparse2Dense code. I would like to know which subset of the Waymo data the authors used for their experiments, since the entire Waymo dataset is too large. I have the data from the 'gt' folder provided by the author, but I am unsure which part of the Waymo dataset it corresponds to. Could you please let me know?
Hello, I am very interested in your work Sparse2Dense, especially the idea of generating dense point clouds. I want to learn more about the whole dense point cloud generation process. Will you release the code for dense object generation?