NVlabs / ScePT

Code for the CVPR 2022 paper "ScePT: Scene-consistent, Policy-based Trajectory Predictions for Planning" by Yuxiao Chen, Boris Ivanovic, and Marco Pavone
Other

File not found And Running evaluate.py error #6

Closed xiaoyaocoding closed 1 year ago

xiaoyaocoding commented 1 year ago

When I run `python process_data.py`, I get:

```
Processed 8 scenes
Traceback (most recent call last):
  File "process_data.py", line 756, in <module>
    process_data(args.data, args.version, args.output_path, args.num_workers)
  File "process_data.py", line 734, in process_data
    with open(data_dict_path, "wb") as f:
FileNotFoundError: [Errno 2] No such file or directory: '../processed/nuScenes_mini_train.pkl'
```

chenyx09 commented 1 year ago

I think you need to create the processed folder?
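For completeness, the fix can be sketched in code as well: create the missing parent directory before writing. `safe_open_wb` below is a hypothetical helper for illustration, not part of the ScePT repo.

```python
import os

def safe_open_wb(path):
    """Open a file for binary writing, creating any missing parent
    directories (e.g. '../processed/') first, so open() cannot raise
    FileNotFoundError for a nonexistent folder."""
    parent = os.path.dirname(path)
    if parent:
        os.makedirs(parent, exist_ok=True)  # no error if it already exists
    return open(path, "wb")
```

Equivalently, just `mkdir processed` at the expected output location before running the script.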


xiaoyaocoding commented 1 year ago

Thank you for your answer! That problem is solved. But when I run evaluate.py, something goes wrong. Could you take another look? Thank you. (screenshot of the error)

chenyx09 commented 1 year ago

This looks like a config mismatch: you need to make sure the config of the trained model is consistent with the model config you use when loading it. I haven't seen this error before, so I'm afraid you'll need to dig deeper yourself.
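One quick way to see such a mismatch is to diff the checkpoint's parameter names and shapes against those of the freshly built model. A minimal pure-Python sketch (using name-to-shape dicts in place of actual tensors; `diff_state_dicts` and the example keys are hypothetical, not ScePT code):

```python
def diff_state_dicts(model_shapes, ckpt_shapes):
    """Report why a state-dict load would fail: keys only in the model,
    keys only in the checkpoint, and keys whose shapes disagree."""
    missing = sorted(k for k in model_shapes if k not in ckpt_shapes)
    unexpected = sorted(k for k in ckpt_shapes if k not in model_shapes)
    mismatched = sorted(
        k for k in model_shapes
        if k in ckpt_shapes and model_shapes[k] != ckpt_shapes[k]
    )
    return missing, unexpected, mismatched

# Illustrative example: model built with an extra encoder and a wider input
# than the checkpoint was trained with.
model = {"enc.weight": (32, 6), "map_enc.weight": (32, 3)}
ckpt = {"enc.weight": (32, 4)}
```

Any non-empty result from such a comparison means the model structure differs from what was trained.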

xiaoyaocoding commented 1 year ago

Thank you for your answer! I checked the config.json file used and found no problem, so I'd like to provide more detailed information in the hope that you can take another look. I ran evaluate.py on the pkl file generated from the mini dataset. The config.json is as follows:

```json
{
  "grad_clip": 1.0,
  "adj_radius": {
    "PEDESTRIAN": {"PEDESTRIAN": 3.0, "VEHICLE": 5.0},
    "VEHICLE": {"VEHICLE": 20.0, "PEDESTRIAN": 5.0}
  },
  "learning_rate_style": "exp",
  "learning_rate": 0.0015,
  "min_learning_rate": 0.0002,
  "learning_decay_rate": 0.9999,
  "use_lane_info": true,
  "use_lane_dec": true,
  "use_scaler": false,
  "pred_num_samples": 4,
  "eval_num_samples": 10,
  "prediction_horizon": 8,
  "minimum_history_length": 1,
  "maximum_history_length": 7,
  "safety_horizon": 10,
  "log_pi_clamp": -10.0,
  "map_encoder": {
    "VEHICLE": {
      "heading_state_index": 3,
      "patch_size": [50, 10, 50, 90],
      "map_channels": 3,
      "hidden_channels": [10, 20, 10, 1],
      "output_size": 32,
      "masks": [5, 5, 5, 3],
      "strides": [2, 2, 1, 1],
      "dropout": 0.5
    }
  },
  "k": 1,
  "k_eval": 25,
  "kl_min": 0.07,
  "kl_weight": 100.0,
  "kl_weight_start": 1,
  "kl_crossover": 400,
  "kl_sigmoid_divisor": 4,
  "gamma_init": 0.2,
  "gamma_end": 0.9,
  "gamma_crossover": 3000,
  "gamma_sigmoid_divisor": 3,
  "col_weight": 0.0,
  "col_weight_start": 0.0,
  "col_crossover": 100,
  "col_sigmoid_divisor": 4,
  "ref_match_weight_init": 0.3,
  "ref_match_weight_final": 0.2,
  "ref_match_weight_decay_rate": 0.997,
  "max_clique_size": 4,
  "rnn_kwargs": {"dropout_keep_prob": 0.75},
  "MLP_dropout_keep_prob": 0.9,
  "enc_rnn_dim_edge": 32,
  "enc_rnn_dim_history": 32,
  "enc_rnn_dim_future": 32,
  "dec_rnn_dim": 128,
  "RNN_proj_hidden_dim": [64],
  "edge_encoding_dim": 32,
  "log_p_yt_xz_max": 6,
  "K": 4,
  "use_z_logit_clipping": true,
  "z_logit_clip_start": 0.05,
  "z_logit_clip_final": 5.0,
  "z_logit_clip_crossover": 300,
  "z_logit_clip_divisor": 5,
  "incl_robot_node": true,
  "score_net_hidden_dim": [32],
  "obs_enc_dim": 32,
  "obs_net_internal_dim": 16,
  "policy_obs_LSTM_hidden_dim": 64,
  "policy_state_LSTM_hidden_dim": 64,
  "policy_FC_hidden_dim": [128, 64],
  "max_greedy_sample": 10,
  "max_random_sample": 10,
  "edge_pre_enc_net": {
    "PEDESTRIAN": {"PEDESTRIAN": "PED_PED_encode", "VEHICLE": "PED_VEH_encode"},
    "VEHICLE": {"PEDESTRIAN": "VEH_PED_encode", "VEHICLE": "VEH_VEH_encode"}
  },
  "rel_state_fun": {"PEDESTRIAN": "PED_rel_state", "VEHICLE": "VEH_rel_state"},
  "node_pre_encode_net": {
    "PEDESTRIAN": {"module": "PED_pre_encode", "enc_dim": 32},
    "VEHICLE": {"module": "VEH_pre_encode", "enc_dim": 32}
  },
  "collision_fun": {
    "PEDESTRIAN": {
      "PEDESTRIAN": {"func": "PED_PED_collision", "L": 6, "W": 4, "alpha": 2},
      "VEHICLE": {"func": "PED_VEH_collision", "L": 6, "W": 4, "alpha": 5}
    },
    "VEHICLE": {
      "PEDESTRIAN": {"func": "VEH_PED_collision", "L": 6, "W": 4, "alpha": 5},
      "VEHICLE": {"func": "VEH_VEH_collision", "L": 6, "W": 4, "alpha": 5}
    }
  },
  "dynamic": {
    "PEDESTRIAN": {
      "name": "DoubleIntegrator",
      "distribution": true,
      "limits": [5, 5],
      "input_dim": 2,
      "default_con": "PED_no_control",
      "state_dim": 4
    },
    "VEHICLE": {
      "name": "Unicycle",
      "input_dim": 2,
      "state_dim": 4,
      "distribution": true,
      "default_con": "VEH_no_control",
      "limits": [5, 1]
    }
  },
  "default_size": {"PEDESTRIAN": [0.5, 0.5], "VEHICLE": [6, 3]},
  "state": {
    "PEDESTRIAN": {"position": ["x", "y"], "velocity": ["x", "y"]},
    "VEHICLE": {"position": ["x", "y"], "velocity": ["norm"], "heading": ["\u00b0"]}
  },
  "lane_info": {"VEHICLE": {"lane": ["delta_y", "delta_heading"]}},
  "pred_state": {
    "PEDESTRIAN": {"position": ["x", "y"], "velocity": ["x", "y"]},
    "VEHICLE": {"position": ["x", "y"], "velocity": ["norm"], "heading": ["\u00b0"]}
  },
  "log_histograms": false,
  "dynamic_edges": "yes",
  "batch_size": 50,
  "offline_scene_graph": "yes",
  "edge_encoding": true,
  "use_map_encoding": true
}
```

I run evaluate.py with the following command:

```shell
python evaluate.py --eval_data_dict=nuScenes_mini_val.pkl --iter_num=10 --log_dir=../experiments/nuScenes/models/ --trained_model_dir=14_Mar_2023_20_38_32 --eval_task=eval_statistics
```

My directory structure is as follows. (screenshot of the directory tree)

Thank you Yuxiao for your help!

chenyx09 commented 1 year ago

I'm afraid I don't have time to debug this for you. But in general, if you cannot load the checkpoint, it means your model structure differs from that of the trained model.

xiaoyaocoding commented 1 year ago

Thank you for your reply. I tried debugging: with use_lane_info set to true, I found that shape(x) = [8, 7, 4], but the program expects x to have 6 features in the last dimension. After the cat operation, shape(input) = [8, 7, 5], whereas I would expect shape(input) = [8, 7, 8]. I think this may be the cause of the problem above. Where do you think I should make a change? Thank you very much. (screenshot of the debugger output)
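For what it's worth, the "6 features" figure is consistent with the config.json posted above: the VEHICLE state is position (x, y) + velocity (norm) + heading, i.e. 4 features, and use_lane_info appends lane (delta_y, delta_heading), i.e. 2 more. A small bookkeeping sketch (the dimension names come from the config; the arithmetic framing is mine):

```python
# Feature widths implied by the posted config.json for VEHICLE nodes.
vehicle_state = {"position": ["x", "y"], "velocity": ["norm"], "heading": ["deg"]}
lane_info = {"lane": ["delta_y", "delta_heading"]}

state_dim = sum(len(v) for v in vehicle_state.values())  # matches shape(x) = [8, 7, 4]
lane_dim = sum(len(v) for v in lane_info.values())
expected_last_dim = state_dim + lane_dim                 # the 6 features the model expects
```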

xiaoyaocoding commented 1 year ago

The problem has been solved. I had set use_map_encoding=true when I trained the model, but did not pass the map_encoding flag on the command line when running evaluate.py. The default map_encoding=False in argument_parser.py caused this problem. I modified argument_parser.py accordingly. (screenshot of the change) Thank you chenyx09 for the reminder!
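The interaction can be reduced to a small argparse sketch (hypothetical, mirroring the flag's behavior rather than reproducing argument_parser.py): when evaluate.py is run without the flag, argparse silently falls back to the default, so that default must match the training-time use_map_encoding=true or the rebuilt model's structure no longer matches the checkpoint.

```python
import argparse

def build_parser(map_encoding_default):
    """Reduced stand-in for argument_parser.py: the model's input width
    depends on map_encoding, so its default must match training."""
    p = argparse.ArgumentParser()
    p.add_argument("--map_encoding", action="store_true",
                   default=map_encoding_default)
    return p

# Before the fix: running evaluate.py without the flag leaves it False,
# while the checkpoint was trained with map encoding on -> mismatch.
before = build_parser(False).parse_args([])
# After the fix: the default is flipped to True, matching config.json.
after = build_parser(True).parse_args([])
```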