wayveai / fiery

PyTorch code for the paper "FIERY: Future Instance Segmentation in Bird's-Eye view from Surround Monocular Cameras"
https://wayve.ai/blog/fiery-future-instance-prediction-birds-eye-view
MIT License

Reproducing Results on Nuscenes #26

Open kaanakan opened 2 years ago

kaanakan commented 2 years ago

Hi,

We trained your model with baseline.yml on 4 V100 GPUs, but the results we got were slightly worse than the ones reported in the paper. We had to load the weights from static_lift_splat_setting.ckpt, because when we didn't, we got a NaN loss every time.

Our results vs. the paper's:

|       | IoU (short) | IoU (long) | VPQ (short) | VPQ (long) |
| ----- | ----------- | ---------- | ----------- | ---------- |
| ours  | 58.8        | 35.8       | 50.5        | 29.0       |
| paper | 59.4        | 36.7       | 50.2        | 29.9       |

Can you help us to understand why the results are different? Thanks in advance.

ygjwd12345 commented 2 years ago

I also trained the model with baseline.yml on 4 V100 GPUs, but the evaluation results are always zero. I visualised samples from my trained models; there is no output except at the 1st epoch. Do you face the same issue?

ygjwd12345 commented 2 years ago

I also checked my records; the loss was always NaN too.

kaanakan commented 2 years ago

hi @ygjwd12345,

I think you cannot train FIERY from scratch. Can you try starting from the pre-trained static version? You can follow issue #8. Could you also report the results you get at the end of training?
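For anyone trying this, a minimal sketch of initialising the full model from the static checkpoint could look like the following. This is illustrative, not the repo's actual code: the helper name is made up, and `strict=False` is just one generic way to let the temporal layers (absent from the static checkpoint) keep their random initialisation.

```python
import torch
import torch.nn as nn

def load_pretrained_static(model: nn.Module, ckpt_path: str) -> nn.Module:
    """Initialise `model` from a static checkpoint, leaving any layers
    that only exist in the full model (e.g. temporal modules) untouched."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    # PyTorch Lightning checkpoints store weights under "state_dict".
    state_dict = ckpt.get("state_dict", ckpt)
    # strict=False skips keys with no counterpart in the checkpoint.
    missing, unexpected = model.load_state_dict(state_dict, strict=False)
    print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")
    return model
```

The `missing`/`unexpected` counts are worth inspecting once: missing keys should all be the new (temporal) layers, and unexpected keys should be empty.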

ygjwd12345 commented 2 years ago

For now, the result is zero because the loss is NaN. I will load the pretrained model and report the results as soon as possible.
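As a side note, a guard like the sketch below (generic PyTorch, not FIERY's actual code) makes a NaN loss fail loudly at the offending step instead of silently producing zero metrics at evaluation time; PyTorch Lightning's `Trainer(detect_anomaly=True)` serves a similar diagnostic purpose.

```python
import torch

def check_finite_loss(loss: torch.Tensor, step: int) -> torch.Tensor:
    """Raise immediately if the loss is NaN/Inf, so the run stops
    instead of training on garbage gradients for many epochs."""
    if not torch.isfinite(loss).all():
        raise RuntimeError(f"Non-finite loss {loss.item()} at step {step}")
    return loss
```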

ygjwd12345 commented 2 years ago

But if we do this, it means FIERY is actually trained in two stages (static pre-training, then temporal training), which is not mentioned in the paper.

kaanakan commented 2 years ago

hi @ygjwd12345, do you have any updates on your training? Thank you.

Also @anthonyhu, can you clarify the training scheme? As @ygjwd12345 mentioned, the paper does not say anything about pretrained weights. Moreover, is it expected to get different results from a new training run?

Thanks a lot!

ygjwd12345 commented 2 years ago

Hi @kaanakan, sorry for the late report.

|                  | IoU (short) | IoU (long) | VPQ (short) | VPQ (long) |
| ---------------- | ----------- | ---------- | ----------- | ---------- |
| **baseline**     |             |            |             |            |
| paper            | 59.4        | 36.7       | 50.2        | 29.9       |
| official ckpt    | 59.4        | 36.7       | 50.2        | 29.9       |
| reproduce, 19    | 59.0        | 36.1       | 50.4        | 29.2       |
| reproduce, 14    | 58.9        | 36.2       | 50.0        | 28.8       |
| reproduce, 9     | 58.3        | 36.2       | 49.4        | 28.7       |
| reproduce, 4     | 57.2        | 36.0       | 36.5        | 27.6       |
| **lss**          |             |            |             |            |
| FIERY (paper)    | -           | 38.2       | -           | -          |
| reproduce, 19    | 67.2        | 37.7       | 58.6        | 30.6       |
| reproduce, 14    | 66.2        | 37.5       | 56.9        | 28.4       |
| reproduce, 9     | 66.0        | 38.1       | 56.8        | 29.8       |
| reproduce, 4     | 64.9        | 37.3       | 55.5        | 27.4       |
| **static lss**   | -           | 35.8       | -           | -          |
| official ckpt    | 63.9        | 35.8       | 52.9        | 26.4       |
| reproduce, 19    | 62.8        | 36.0       | 54.3        | 27.0       |
| reproduce, 14    | 64.8        | 36.2       | 54.8        | 27.6       |
| reproduce, 9     | 64.3        | 36.6       | 54.3        | 27.0       |
| reproduce, 4     | 64.0        | 36.0       | 52.9        | 25.9       |

I reproduced three settings; the results are a little lower than the paper's, but acceptable.

huangzhengxiang commented 2 years ago

Hi. Thanks for the authors' great work and your helpful comments. I ran evaluate.py with the official checkpoint but got the following output:

| metric | short | long |
| ------ | ----- | ---- |
| iou    | 53.5  | 28.6 |
| pq     | 39.8  | 18.0 |
| sq     | 69.4  | 66.3 |
| rq     | 57.4  | 27.1 |

Is something wrong? These seem much lower than the results you got.

ygjwd12345 commented 2 years ago

@huangzhengxiang I didn't check the author's checkpoint; I only reproduced the training.

WangzcBruce commented 1 year ago

Hello everyone! How can I get VPQ? The code seems to only provide IoU, SQ, RQ, and PQ.

anthonyhu commented 1 year ago

Hello! What is called "pq" in the metrics corresponds to VPQ :)
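For readers cross-checking the metrics in this thread: panoptic quality decomposes as PQ = SQ × RQ, so the pq/sq/rq values printed by evaluate.py should be mutually consistent. A quick check against the short- and long-range figures quoted earlier:

```python
def pq_from_sq_rq(sq: float, rq: float) -> float:
    """Panoptic quality decomposes as PQ = SQ * RQ (all in percent)."""
    return sq * rq / 100.0

# Short- and long-range values from the evaluate.py output quoted above.
print(round(pq_from_sq_rq(69.4, 57.4), 1))  # 39.8 -- matches the reported pq
print(round(pq_from_sq_rq(66.3, 27.1), 1))  # 18.0 -- matches the reported pq
```

So the numbers above are internally consistent; the gap to the paper's VPQ would have to come from the evaluation setup, not a metric bug.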