aaaa916 opened 2 weeks ago
Thank you for your interest. First, the TopoLogic paper trains with 8 GPUs and a per-GPU batch size of 2 to save training time; in the released code the per-GPU batch size is 1. The training results under these two settings are essentially the same. Second, if your results differ too much from those in the paper, your mmdet3d environment may be inconsistent; please check your environment. Finally, the lane topology is fed into the GNN to enhance lane learning.
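For intuition, here is a minimal sketch (not the authors' exact module; class and argument names are made up for illustration) of what "feeding the lane topology into a GNN" can look like: one message-passing round that treats the predicted lane-lane topology matrix as a soft adjacency and uses it to refine the lane query features.

```python
import torch
import torch.nn as nn

class TopologyGNNLayer(nn.Module):
    """Sketch: refine lane queries with a predicted lane-lane topology."""

    def __init__(self, dim=256):
        super().__init__()
        self.msg = nn.Linear(dim, dim)       # transform neighbor features
        self.out = nn.Linear(dim * 2, dim)   # fuse self + aggregated messages
        self.norm = nn.LayerNorm(dim)

    def forward(self, lane_feats, topology):
        # lane_feats: (B, N, C) lane query embeddings
        # topology:   (B, N, N) predicted lane-lane connectivity scores in [0, 1]
        adj = topology / topology.sum(-1, keepdim=True).clamp(min=1e-6)  # row-normalize
        messages = torch.bmm(adj, self.msg(lane_feats))  # aggregate from connected lanes
        fused = self.out(torch.cat([lane_feats, messages], dim=-1))
        return self.norm(lane_feats + fused)  # residual update
```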
Thanks a lot!
Hello, while reproducing the code I noticed that Section 4.2 "Implementation Details" of your paper says, under training: "All experiments are trained for 24 epochs on 8 NVIDIA RTX 3090 GPUs with a batch size of 2." I understand this to mean a total batch size of 8 × 2. However, the code sets base_batch_size=8 when scaling the learning rate by the number of GPUs. I am currently training on 2 RTX 3090 GPUs and have run two experiments, with per-GPU batch sizes of 1 and 2, but the results still differ from yours. So I would like to ask how exactly the batch size should be set here, and whether the same results can be achieved with fewer GPUs.

The best result I have reproduced so far is shown in the figure. I am using 3 GPUs with samples_per_gpu=1, so the total batch size is 3. All other configuration parameters are unchanged, and the learning rate is scaled according to the number of GPUs (see the sketch below).

I also have a question after reviewing the code: does the topology result serve as input to each decoder layer of the detection branch? I ask because the detection performance does not improve over previous methods. Thank you!
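For reference, a minimal sketch of the linear learning-rate scaling that base_batch_size=8 implies, assuming the effective batch is GPUs × samples_per_gpu as in mmdet3d-style configs; the base_lr value here is hypothetical, take the real one from the repo's config.

```python
# Linear LR scaling rule: scale base_lr by effective batch / base batch.
num_gpus = 3
samples_per_gpu = 1
base_batch_size = 8     # batch size the base LR was tuned for (from the config)
base_lr = 2e-4          # hypothetical; use the value from the repo's config

effective_batch = num_gpus * samples_per_gpu          # 3 in this setup
scaled_lr = base_lr * effective_batch / base_batch_size
print(scaled_lr)        # 7.5e-05 for these example numbers
```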