hhhhsmx opened this issue 2 years ago (status: Open)
You have to modify the base directory at the top of the script to match your file structure; mine points at my local dataset folder. This should help with the downloading. The next issue is prepare_semantic3d.py under the utils folder. It has some Python errors that I am still trying to solve. Good luck! Let me know if you get it working with Semantic3D data.
I have now finished the data preprocessing, but when I open the preprocessed .npy files the results contain NaN. During subsampling it prints: 3_office_9.npy: Loaded data of shape : (603754, 2)
Traceback (most recent call last):
File "/home/mnt_partation/for_all/smx_part/RandLA-Net-pytorch/utils/subsample_data.py", line 47, in
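A quick way to see what actually ended up in one of the generated files is sketched below; the file name is just the one from the log above, so adjust the path to your own setup:

```python
import numpy as np

# File name taken from the log above -- change the path to your setup.
data = np.load("3_office_9.npy")

print("shape:", data.shape)                 # a 2-column shape here would explain the error
print("NaN values:", np.isnan(data).sum())  # non-zero means the preprocessing wrote NaN
print(data[:5])                             # peek at the first few rows
```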
> I have now finished the data preprocessing, but when I open the preprocessed .npy files the results contain NaN. During subsampling it shows:
Just to clarify, I am not the dev of this repo. I am also just using Google Translate since I don't read Chinese.
It looks like your prepare_semantic.py is not running properly. The error indicates that points.shape is Nx3, which makes sense since you also mentioned (again, assuming the translation is correct) that your generated .npy files are empty. This would mean it's trying to read from an empty file.
Unfortunately, there isn't much other help I can give; I would need more info on how you're setting up your prepare_semantic3d.py script. I would suggest using the S3DIS dataset instead, since the setup is much more straightforward. I was able to get mine running, but the data preprocessing steps took almost a full day to run.
Here's a list of things that need to be modified for the semantic3d dataset:
- prepare_semantic.py: you should add a `del points` after each loop to save RAM (see the sketch after this list)
- subsample_data.py: This can't handle files over 13GB for some reason (may be my system, but I have 48GB of RAM, that should be enough)
- classes.json: This has to be created/modified to be used with semantic3d
- tools.py: `class_weights` needs to be changed to begin with 9 weights (0-8) instead of 14 (also sketched below)

Hope you figure things out 🧑‍💻
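A minimal sketch of what the prepare_semantic.py and tools.py changes above could look like; the file names, loop body, and weight values here are placeholders rather than the actual repo code:

```python
import gc
import numpy as np

# prepare_semantic.py -- the idea behind "del points" after each loop:
# release the big array at the end of every iteration so peak RAM stays at
# roughly one file's worth instead of growing across files.
npy_files = ["scene_0.npy", "scene_1.npy"]  # placeholder file names

for npy_path in npy_files:
    # In the real script this would be: points = np.load(npy_path)
    # (np.load(npy_path, mmap_mode="r") can also help with very large files).
    points = np.zeros((1_000_000, 6), dtype=np.float32)  # stand-in data
    # ... subsample / convert / write results here ...
    del points    # drop the reference to the big array
    gc.collect()  # ask Python to release the memory right away

# tools.py -- class_weights should start with 9 entries (classes 0-8) for
# Semantic3D instead of the original 14. The values below are placeholders;
# real weights are usually derived from the per-class point counts.
class_weights = np.ones(9, dtype=np.float32)
```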
Great! For the Semantic3D dataset, what are the exact values for class_weights? Looking forward to your reply.
When I run `cd RandLA-Net-pytorch/utils` `./download_semantic3d.sh`, it keeps showing `-bash: cd/.sh: No such file or directory`. What could be causing this?