EdwardLeeLPZ / PowerBEV

PowerBEV, a novel and elegant vision-based end-to-end framework that consists only of 2D convolutional layers to perform perception and forecasting of multiple objects in bird's-eye view (BEV).

How can I reproduce the reported results? #9

Closed: mingyuShin closed this issue 6 months ago

mingyuShin commented 7 months ago

Hello! I have another question. I trained a model from scratch with a batch size of 8 on a single A100 80GB GPU. I ran the training twice, but in both runs the Video Panoptic Quality (VPQ) was lower than the performance reported in the paper. Could you tell me how I can reproduce the results?

VPQ: 30.64 (first run), 30.29 (second run)

And how can I train the static model?


```yaml
TAG: 'powerbev'

GPUS: [0]

BATCHSIZE: 8
PRECISION: 16

LIFT:
  # Long
  X_BOUND: [-50.0, 50.0, 0.5]  # Forward
  Y_BOUND: [-50.0, 50.0, 0.5]  # Sides

  # # Short
  # X_BOUND: [-15.0, 15.0, 0.15]  # Forward
  # Y_BOUND: [-15.0, 15.0, 0.15]  # Sides

MODEL:
  BN_MOMENTUM: 0.05

N_WORKERS: 16
VIS_INTERVAL: 100
```
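As a side note on the `LIFT` bounds above: each axis spans `(max - min) / resolution` cells, so the long and short settings both yield a 200×200 BEV grid, just at different cell sizes (0.5 m vs. 0.15 m). A quick sanity check:

```python
def bev_grid_size(bound):
    """Number of BEV cells along one axis for a [min, max, resolution] bound."""
    lo, hi, res = bound
    return round((hi - lo) / res)

# Long setting: [-50, 50] m at 0.5 m per cell
print(bev_grid_size([-50.0, 50.0, 0.5]))    # 200
# Short setting: [-15, 15] m at 0.15 m per cell -- same cell count, finer cells
print(bev_grid_size([-15.0, 15.0, 0.15]))   # 200
```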
EdwardLeeLPZ commented 6 months ago


Hi,

Since we did not run PowerBEV on a single A100, it is difficult for us to analyze the specific reasons for the performance changes, especially with respect to distributed training. For static model training, please use powerbev_static.yml from commit V1.2.
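For reference, launching static training would look something like the sketch below. The `V1.2` ref and the `powerbev_static.yml` file come from the reply above, but the script name and `--config` flag are assumptions; check the repository README for the actual training command.

```shell
# Check out the V1.2 state mentioned above (the exact ref name may differ)
git checkout V1.2
# Train the static model -- entry point and flag name are assumptions
python train.py --config powerbev/configs/powerbev_static.yml
```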

mingyuShin commented 6 months ago

Thank you for your reply.

I have almost reproduced the results using a two-stage training strategy.
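A two-stage strategy here presumably means training the static (segmentation-only) model first, then initializing the full forecasting model from those weights and fine-tuning. The sketch below is an assumption based on this remark, not the authors' documented procedure; `init_from_pretrained` is a hypothetical helper that copies over every parameter whose name and size match the static checkpoint and keeps the random initialization for new heads.

```python
def init_from_pretrained(full_state, static_state):
    """Merge a stage-1 (static) checkpoint into a stage-2 (full) state dict.

    Parameters present in both, with matching sizes, are taken from the
    pretrained static model; everything else (e.g. new forecasting heads)
    keeps its fresh initialization. Weights are plain lists here for
    illustration; with a real framework they would be tensors.
    """
    merged = {}
    for name, weight in full_state.items():
        pretrained = static_state.get(name)
        if pretrained is not None and len(pretrained) == len(weight):
            merged[name] = pretrained   # reuse stage-1 weight
        else:
            merged[name] = weight       # keep random init for new parts
    return merged

static_ckpt = {"encoder.weight": [1.0, 2.0, 3.0]}
full_init = {"encoder.weight": [0.0, 0.0, 0.0], "flow_head.weight": [9.0]}
state = init_from_pretrained(full_init, static_ckpt)
print(state)  # {'encoder.weight': [1.0, 2.0, 3.0], 'flow_head.weight': [9.0]}
```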

pupu-chenyanyan commented 6 months ago

@mingyuShin Hello! I ran into the same problem. I trained a model from scratch with a batch size of 4 on two A6000 GPUs. The VPQ was only 32.4, which is lower than the performance reported in the paper. How did you solve the problem?
Thanks!