Open moonyy929 opened 9 months ago
Hi, I don't quite understand your question. Could you provide more details, or a screenshot of the code reporting the error?
After using the following command: "python src/datascripts/dataloader_nuscenes.py --DATAROOT path/to/nuScenes/root/directory --STOREDIR path/to/directory/with/preprocessed/data" to preprocess the nuScenes data, the file ex_list was generated, as shown in the following picture.
And I noticed that, while the code iterates over ex_list, there are indices with a dictionary type. This is the error message I encountered. It looks like it is using dataset_argoverse.
Sorry, I haven't come across this before. I guess you could debug to see what the idx is.
I iterated through each index and printed them, as shown in the figure. Starting from the 32nd index, the entries become dictionaries. I'm wondering if there is an issue in the preprocessing step that results in ex_list being generated with such data.
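One way to narrow this down is to scan ex_list and report where each entry type first appears, rather than printing every index by hand. The helper below is a minimal sketch; it assumes ex_list deserializes to a plain Python list (the file path and the expected entry type are placeholders, not taken from the repo).

```python
import pickle

def summarize_types(entries):
    """Map each entry's type name to (count, first_index).

    A healthy preprocessed list should report a single type; a mixed
    result shows exactly where the unexpected dict entries begin.
    """
    summary = {}
    for i, entry in enumerate(entries):
        name = type(entry).__name__
        if name not in summary:
            summary[name] = [0, i]  # [count, first index seen]
        summary[name][0] += 1
    return {name: tuple(v) for name, v in summary.items()}

# Hypothetical usage (path is a placeholder):
# with open("path/to/ex_list", "rb") as f:
#     entries = pickle.load(f)
# print(summarize_types(entries))
```

If the output shows, say, `{'bytes': (32, 0), 'dict': (..., 32)}`, that confirms the file itself was written with mixed entries and the problem is in preprocessing rather than in the training loop.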
Hi, did you change the code? The original code doesn't have this problem.
I went through my code, and the only modification I made was replacing the part of your code that uses DistributedDataParallel, because my environment only supports a single GPU. Or should I do the nuScenes data preprocessing again?
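For reference, a single-GPU replacement of the DistributedDataParallel wrapping can usually be kept behind a flag instead of deleting the code path. This is only a sketch of that pattern (`wrap_model` and the `distributed` flag are my names, not from the repo), and it should not change what the model computes or what gets written to ex_list:

```python
import torch

def wrap_model(model, device, distributed=False):
    """Move the model to `device`; wrap it in DistributedDataParallel
    only when running multi-GPU, otherwise return the plain model."""
    model = model.to(device)
    if distributed:
        # Requires torch.distributed.init_process_group() to have been
        # called beforehand; skipped entirely in the single-GPU case.
        model = torch.nn.parallel.DistributedDataParallel(
            model, device_ids=[device.index])
    return model
```

Since this wrapping only affects gradient synchronization during training, a dict appearing inside ex_list points at the preprocessing output rather than at the DDP change.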
Yes, I think you can do the nuScenes data preprocessing again.
Hello! I have some questions regarding your model. I have already performed preprocessing on nuScenes and generated the file ex_list. However, when preparing to train the model, I noticed that while iterating over ex_list, there are indices with a type of dict. I also observed that in train.py, the "Dataset" inside the "def distributed_run()" function uses dataset_argoverse. I would like to ask what the issue could be here?
Thank you!