Closed DavidTorresOcana closed 11 months ago
dgp/utils/dataset_conversion.py
line 47 at r1 (raw file):
Could you add a unit test for this function?
Done.
dgp/utils/dataset_conversion.py
line 81 at r1 (raw file):
Could you add a unit test for this function?
Done.
dgp/utils/dataset_conversion.py
line 116 at r2 (raw file):
Is there a particular reason 20 is chosen as the cutoff for the loop?
It was for safety, but I agree it is not needed. Removed.
dgp/utils/dataset_conversion.py
line 121 at r2 (raw file):
Out of curiosity, what is `nx` (it's not in ply files generated with `write_point_cloud`)?
The implemented functionality allows encoding intensities and timestamps into the PLY file. When using Open3D to save PLYs, there is no option for timestamps, whereas there is one for color (RGB). Hence, in the past, I used the 1st component of the vertices' normals to embed time (nx). As this is legacy code and is not required here, I removed it. Apologies, and if you want the feature back, please let me know.
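For reference, the removed legacy trick can be sketched in a few lines of plain Python (my illustration, not dgp code): an ASCII PLY whose `nx` normal component carries per-point timestamps, using the `nx`/`ny`/`nz` property names that Open3D writes for normals.

```python
# Minimal sketch (not dgp code): abuse the 'nx' normal component of an
# ASCII PLY to carry per-point timestamps, as described above.
points = [(1.0, 2.0, 3.0, 0.10), (4.0, 5.0, 6.0, 0.25)]  # x, y, z, t

header = "\n".join([
    "ply", "format ascii 1.0",
    f"element vertex {len(points)}",
    "property float x", "property float y", "property float z",
    "property float nx", "property float ny", "property float nz",
    "end_header",
])
body = "\n".join(f"{x} {y} {z} {t} 0.0 0.0" for x, y, z, t in points)
ply_text = header + "\n" + body + "\n"

# Reading it back: timestamps live in the 4th column ('nx')
rows = [line.split() for line in ply_text.split("end_header\n")[1].strip().splitlines()]
timestamps = [float(r[3]) for r in rows]
print(timestamps)  # [0.1, 0.25]
```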
Ok, thanks for that!
dgp/utils/dataset_conversion.py
line 121 at r2 (raw file):
I see, thank you for the explanation! I think it's fine to leave off.

> If using Open3D to save PLYs, there is no option for timestamps, whereas there is an option for color (RGB). Hence, in the past, I used the 1st component of vertices' normals to embed time (nx).

Not required, but maybe one workaround for this issue is to let the user specify a custom properties mapping for instances where the ply file is generated outside of dgp. This lets a user call `read_cloud_ply` with the custom mapping and then `write_cloud_ply` to get a new ply file with the expected default dgp property names. Ex:

```python
def read_cloud_ply(file, remap_properties=None):
    properties_map = {
        # property_name: ply_property_name
        'x': 'x',
        'y': 'y',
        'z': 'z',
        'intensity': 'intensity',
        'timestamp': 'timestamp',
    }
    if remap_properties:
        properties_map.update(remap_properties)
    .....
    intensities_idx = [properties.index(k) for k in [properties_map["intensity"]] if k in properties]
    .....

read_cloud_ply(open3d_ply_file, remap_properties={'intensity': 'nx'})
```
Hello. I do not think that is a good idea, especially due to types. The case I explained was actually a misuse of the PLY format, so it is a good thing our reader does not handle it.
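To make the types concern concrete (my example, not from the PR): a PLY `float` property is IEEE 754 single precision, which cannot hold a microsecond-resolution Unix timestamp without losing whole seconds.

```python
import struct

def to_float32(x):
    # Round-trip through IEEE 754 single precision,
    # as storing the value in a PLY 'float' property would
    return struct.unpack('<f', struct.pack('<f', x))[0]

ts_us = 1_700_000_000_000_000  # microsecond-resolution Unix timestamp (~Nov 2023)
recovered = int(to_float32(ts_us))
print(ts_us - recovered)  # on the order of seconds of error, in microseconds
```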
That being said, 2 comments:
@yuta-tsuzuki-woven Is there anything else we should improve in this PR to be merged?
Regards
Ok, 2 approvals, then ready for merge, no? Is there anything I need to do? I am not used to Reviewable.
@DavidTorresOcana I think you just need to rebase and pass the pre-merge tests. It seems the linter is failing: https://github.com/TRI-ML/dgp/actions/runs/4666965545/jobs/8262166566?pr=145
You can enable the linter to autoformat before pushing a commit by following these steps https://github.com/TRI-ML/dgp/actions/runs/4666965545/jobs/8262166566?pr=145
_Reviewable_ status: :shipit: complete! all files reviewed, all discussions resolved (waiting on @DavidTorresOcana)
I was never able to run that. This is what I get, both inside and outside the Docker container:

```
dgp$ make link-githooks
make: *** No rule to make target 'link-githooks'. Stop.
```

Could it be that the target `link-githooks` no longer exists and is now called `setup-linters`?
I used Flake8 to check the format of the files I modified. The format of my changes seems ok to me.
I am not sure how to make CI pass after commits were made....
@yuta-tsuzuki-woven Is it ok to merge then?
Anything else to do @rachel-lien @yuta-tsuzuki-woven ?
Currently, lidar-related data is stored as NumPy files.
This file format, while simple enough, is not compatible with other software like Blender, MeshLab, etc. Performance could also be improved.
This pull request proposes PLY file format support.
This feature, while making lidar data more compatible with other software, also makes DGP about 10x faster at reading/writing point clouds from/to disk, at the expense of only a ~10% increase in size.
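As a rough sketch of what the format change enables (hypothetical helper names, ASCII PLY for readability; the actual PR may well use the binary PLY encoding), a self-describing point cloud with per-point intensity and timestamp properties can be round-tripped like this:

```python
import io

def write_cloud_ply_sketch(f, cloud):
    """Hypothetical sketch (not the PR's actual function): write an ASCII PLY
    with per-point intensity and timestamp properties."""
    f.write("ply\nformat ascii 1.0\n")
    f.write(f"element vertex {len(cloud)}\n")
    for ptype, name in [("float", "x"), ("float", "y"), ("float", "z"),
                        ("float", "intensity"), ("double", "timestamp")]:
        f.write(f"property {ptype} {name}\n")
    f.write("end_header\n")
    for row in cloud:
        f.write(" ".join(str(v) for v in row) + "\n")

def read_cloud_ply_sketch(f):
    """Read back the same layout; returns (x, y, z, intensity, timestamp) tuples."""
    # Skip the header, then parse one vertex per line
    while f.readline().strip() != "end_header":
        pass
    return [tuple(float(v) for v in line.split()) for line in f if line.strip()]

buf = io.StringIO()
write_cloud_ply_sketch(buf, [(1.0, 2.0, 3.0, 0.5, 0.01)])
buf.seek(0)
cloud = read_cloud_ply_sketch(buf)
print(cloud)  # [(1.0, 2.0, 3.0, 0.5, 0.01)]
```

Because the header declares element counts and property names/types, tools like MeshLab or Blender can open such a file directly, which is the compatibility benefit over raw NumPy dumps.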
Performance comparison: DGP native vs. PLY support (benchmark figures in the PR).
@rachel-lien @akira-wakatsuki-woven ping