Hi, many thanks for your nice paper and the well-organized open-source codebase. I have some doubts about the `time` and `#Para` metrics in Fig. 6 of your paper. Looking forward to your reply.

---
Thank you for your interest in our work. We apologize for not providing the detailed settings used to calculate the `time` and `#Para` metrics introduced in our paper.
As for calculating `#Para`, PyTorch provides a simple way to count the number of trainable parameters in a model:
```python
# count the number of trainable parameters
num_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
```
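For example, applied to a toy model (the model below is purely illustrative, not the one from this repo), the one-liner counts every weight and bias that participates in training:

```python
import torch.nn as nn

# Toy model for illustration only -- not the model from this repo.
model = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 64),
)

# Count only parameters that are updated during training (requires_grad=True).
num_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"#Para: {num_params}")  # 128*256 + 256 + 256*64 + 64 = 49472
```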
As for calculating `time`, i.e. the time cost of recovering a trajectory, we commented out the loading of the training and validation sets; for the test set, we use `batch_size=1` and `num_workers=8` to reduce the cost of preparing data. We believe that with multiple CPU workers the data-preparation time can be ignored, so we mainly measure the GPU time cost of recovering a trajectory:
```python
test_iterator = torch.utils.data.DataLoader(test_dataset, batch_size=1,
                                            shuffle=False, collate_fn=lambda x: collate_fn(x),
                                            num_workers=8, pin_memory=False)
```
Besides, the time cost of evaluating a trajectory is also excluded. Thus, we add a `continue` at Line 182 in `multi_train.py` so that `cal_id_acc_batch` and `cal_rn_dis_loss_batch` are not called.
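For concreteness, a hypothetical sketch of what that skip looks like. Only the names `cal_id_acc_batch` and `cal_rn_dis_loss_batch` come from the repo; the surrounding loop structure and call signatures are our assumption for illustration, not the actual code of `multi_train.py`:

```python
# Hypothetical sketch -- the real loop lives around Line 182 of multi_train.py.
for batch in test_iterator:
    predictions = model(batch)  # GPU work we want to time (illustrative call)

    continue  # added for timing runs: skip the evaluation below

    # excluded from the reported time (call signatures are illustrative)
    acc = cal_id_acc_batch(predictions, batch)
    dis = cal_rn_dis_loss_batch(predictions, batch)
```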
Finally, we run `multi_main.py` with only the testing phase, measure the total time for recovering all trajectories in the test set, and divide it by the number of trajectories in the test set to obtain the time reported in the paper.
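Putting the pieces together, a minimal sketch of how such a per-trajectory measurement can be done, assuming the `model`, `test_iterator`, and `test_dataset` set up above. `torch.cuda.synchronize` is used so that asynchronous GPU kernels are fully counted; the exact `model(batch)` call is illustrative and depends on `collate_fn`:

```python
import time
import torch

model.eval()
torch.cuda.synchronize()          # flush pending GPU work before starting the timer
start = time.perf_counter()

with torch.no_grad():
    for batch in test_iterator:   # batch_size=1: one trajectory per iteration
        output = model(batch)     # recover one trajectory (illustrative call)

torch.cuda.synchronize()          # wait for all recovery kernels to finish
total = time.perf_counter() - start

# average time per recovered trajectory
time_per_traj = total / len(test_dataset)
print(f"time: {time_per_traj * 1000:.2f} ms per trajectory")
```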
If you have any further doubts or concerns, feel free to discuss them with me.