When I run caching with only part of the full data, it takes far too long. The log showed this:
INFO {/root/code/nuplan-devkit/nuplan/planning/training/experiments/caching.py:148} Starting dataset caching of 6075752 files...
According to the Fact Sheet - nuPlan Dataset v1.0 at https://www.nuscenes.org/nuplan, the dataset contains at most ~3 million scenarios. Why are there 6075752 files to preprocess? I guess the count is on the order of the number of lidar_pc files, right?
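As a quick sanity check (my own arithmetic, using only the two numbers above), dividing the reported file count by the sampled fraction recovers the total the job must be enumerating:

```python
# Back-of-the-envelope check: if the cache job enumerates one file per
# lidar_pc (per-frame) sample rather than one per logical scenario, the
# observed count divided by the sampled fraction gives the database total.
observed_files = 6_075_752   # from the caching log above
fraction = 0.50              # scenario_filter.limit_total_scenarios
implied_total = observed_files / fraction
print(f"Implied total samples: {implied_total:,.0f}")  # -> 12,151,504
```

12,151,504 is far above the ~3 million scenarios in the fact sheet, which would be consistent with the count being per lidar_pc frame rather than per scenario.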
My script looked like this:
    '+training=training_raster_model',  # raster model that consumes ego, agents and map raster layers and regresses the ego's trajectory
    'scenario_builder=nuplan',  # use the nuplan trainval database
    'scenario_filter.limit_total_scenarios=0.50',  # a float in (0, 1) keeps that fraction of all scenarios; an int keeps an absolute count
])
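For reference, here is a minimal sketch (not part of my original script) of how one could count the scenarios the filter actually selects before launching the cache job. It assumes the devkit's builder helpers (build_scenario_builder, build_scenario_filter) and the Sequential worker; the hydra config path and name below are placeholders and should match whatever config your caching script composes:

```python
import hydra

from nuplan.planning.script.builders.scenario_building_builder import build_scenario_builder
from nuplan.planning.script.builders.scenario_filter_builder import build_scenario_filter
from nuplan.planning.utils.multithreading.worker_sequential import Sequential

# Placeholder config path/name: reuse the same config your caching run composes.
hydra.initialize(config_path='../nuplan/planning/script/config/common')
cfg = hydra.compose(config_name='default_common', overrides=[
    'scenario_builder=nuplan',                     # nuplan trainval database
    'scenario_filter.limit_total_scenarios=0.50',  # keep 50% of all scenarios
])

# Build the scenario list exactly as the caching job would see it.
scenario_builder = build_scenario_builder(cfg)
scenario_filter = build_scenario_filter(cfg.scenario_filter)
scenarios = scenario_builder.get_scenarios(scenario_filter, Sequential())
print(f'{len(scenarios)} scenarios selected')  # compare against the caching log
```

If this prints a number in the millions, that would suggest the 6075752 in the log is just the per-frame expansion of scenarios, and lowering limit_total_scenarios (or passing an int) would shrink the cache job.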
Did the caching run correctly?