Describe the issue
Hello,
First of all, thank you for the awesome library.
We are trying to train the RandLANet model on a custom dataset to perform 3D semantic segmentation. For that we are using the Custom3D dataset class which, from our understanding, is the class to use for bringing in your own data.
We created a folder for our dataset with three folders inside, as expected by the Custom3D class ("Expect point clouds to be in npy format with train, val and test files in separate folders.").
In each of these folders we saved our training, validation, and test files in the expected format (x, y, z, class, feat_1, feat_2, ..., feat_n), roughly as shown below.
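For example, a single training file can be written like this (synthetic data and placeholder paths, purely to illustrate the column layout):

```python
import numpy as np

# Synthetic cloud with 1000 points and two extra features;
# columns are x, y, z, class, feat_1, feat_2 as Custom3D expects.
xyz = np.random.rand(1000, 3).astype(np.float32)
cls = np.random.randint(0, 3, size=(1000, 1)).astype(np.float32)
feats = np.random.rand(1000, 2).astype(np.float32)

cloud = np.hstack([xyz, cls, feats])  # shape (1000, 6)

# 'our_dataset/train' is a placeholder; the val/ and test/ files
# are written the same way into their own folders.
np.save('our_dataset/train/cloud_000.npy', cloud)
```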
We also had to override the get_label_to_names static method of the Custom3D class so that it returns our own label mapping instead of the built-in one.
With this class we managed to train the model on the custom dataset via the pipeline's run_train function, but when we tried to evaluate the model with the pipeline's run_test function, it threw the error described below. A minimal sketch of our setup is included under "Steps to reproduce the bug".
Could you help us with this issue?
Also, is this the correct approach to training models on custom datasets in Open3D?
Thanks for your help.
Steps to reproduce the bug
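Roughly what we are doing (the label names, paths, and num_classes are placeholders, not our real values; all other hyperparameters are left at the library defaults):

```python
import open3d.ml.torch as ml3d  # we use the PyTorch build


# Custom3D subclass that returns our own label mapping
# (the class names here are placeholders).
class OurCustom3D(ml3d.datasets.Custom3D):

    @staticmethod
    def get_label_to_names():
        return {0: 'ground', 1: 'vegetation', 2: 'building'}


# dataset_path contains the train/, val/ and test/ folders of .npy files.
dataset = OurCustom3D(dataset_path='path/to/our_dataset')

# num_classes matches the label mapping above.
model = ml3d.models.RandLANet(num_classes=3)
pipeline = ml3d.pipelines.SemanticSegmentation(model=model, dataset=dataset)

pipeline.run_train()  # trains without problems
pipeline.run_test()   # raises the TypeError below
```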
Error message
```
File .venv/lib/python3.10/site-packages/open3d/_ml3d/datasets/customdataset.py:223, in Custom3D.save_test_result(self, results, attr)
    220 make_dir(path)
    222 pred = results['predict_labels']
--> 223 pred = np.array(self.label_to_names[pred])
    225 store_path = join(path, name + '.npy')
    226 np.save(store_path, pred)

TypeError: unhashable type: 'numpy.ndarray'
```
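As far as we can tell, the problem is that results['predict_labels'] is a per-point numpy array, while a Python dict lookup requires a hashable scalar key. A minimal demonstration (the mapping values are made up):

```python
import numpy as np

label_to_names = {0: 'ground', 1: 'vegetation'}  # made-up mapping
pred = np.array([0, 1, 1, 0])                    # per-point predictions

label_to_names[pred]  # TypeError: unhashable type: 'numpy.ndarray'
```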
Expected behavior
We would expect the save_test_result function of the Custom3D dataset not to fail by using a whole numpy array of predictions as a key into the label_to_names dict; presumably the mapping should be applied to each predicted label individually.
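A minimal sketch of the element-wise mapping we would expect instead (same made-up mapping as above):

```python
import numpy as np

label_to_names = {0: 'ground', 1: 'vegetation'}
pred = np.array([0, 1, 1, 0])

# Look up each predicted label id individually instead of
# using the whole array as a dict key.
names = np.array([label_to_names[int(p)] for p in pred])
# array(['ground', 'vegetation', 'vegetation', 'ground'], dtype='<U10')
```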
Open3D, Python and System information

- Operating system: macOS Ventura 13.4.1
- Python version: 3.10.12
- Open3D version: 0.17.0
- System type: x64
- Is this a remote workstation?: no
- How did you install Open3D?: pip
Additional information
No response