tgxs002 / CORA

A DETR-style framework for open-vocabulary detection (OVD). CVPR 2023
Apache License 2.0

Overall training process #9

Open yuki1ssad opened 1 year ago

yuki1ssad commented 1 year ago

Thank you for your excellent work. I am a little confused about the overall training process of CORA. Could you please describe the overall training process? Thank you very much!

QHCV commented 1 year ago

> Thank you for your excellent work. I am a little confused about the overall training process of CORA. Could you please describe the overall training process? Thank you very much!

Hello, since you know how to train now, could you share your training experience (including the environment configuration)? Thank you very much!

wusize commented 1 year ago

Hi! Has anyone reproduced the results?

eternaldolphin commented 1 year ago

> Hi! Has anyone reproduced the results?

I tried to reproduce it, but was limited by GPU memory (CORA may need 8 × 80G). My result on OV-COCO novel classes with RN50 is slightly lower than 35.1 (about 34.3).

You are welcome to discuss further via my WeChat: Maytherebe.

kinredon commented 1 year ago

@eternaldolphin Hi, which GPU devices do you use to reproduce the results?

eternaldolphin commented 1 year ago

> @eternaldolphin Hi, which GPU devices do you use to reproduce the results?

4 × 40G.

shaniaos commented 1 year ago

> Hi! Has anyone reproduced the results?
>
> I tried to reproduce it, but was limited by GPU memory (CORA may need 8 × 80G). My result on OV-COCO novel classes with RN50 is slightly lower than 35.1 (about 34.3). You are welcome to discuss further via my WeChat: Maytherebe.

That's wonderful! May I ask whether you could kindly open-source your implementation on GitHub so that others can learn from it and reproduce the results of CORA? I hope I'm not being rude. Thanks.

ysysys666 commented 2 months ago

Excuse me, what is your batch size setting? I used 4 × 48G GPUs with batch size 4, but got CUDA out of memory. @eternaldolphin
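(Editor's note: one common workaround when a given batch size hits CUDA OOM is gradient accumulation, which trades per-step memory for more forward/backward passes while keeping the effective batch size. Below is a minimal PyTorch sketch; the model, data, and accumulation factor are placeholders for illustration, not CORA's actual training loop.)

```python
import torch
from torch import nn

# Hypothetical tiny model and data, standing in for the real detector.
model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

accum_steps = 4  # effective batch = micro-batch size * accum_steps
micro_batches = [(torch.randn(2, 4), torch.randn(2, 2)) for _ in range(8)]

updates = 0
optimizer.zero_grad()
for step, (x, y) in enumerate(micro_batches, start=1):
    # Scale the loss so accumulated gradients average over the effective batch.
    loss = loss_fn(model(x), y) / accum_steps
    loss.backward()  # gradients accumulate across micro-batches
    if step % accum_steps == 0:
        optimizer.step()       # one optimizer update per effective batch
        optimizer.zero_grad()
        updates += 1
```

With 8 micro-batches and `accum_steps = 4`, this performs 2 optimizer updates, each seeing an effective batch of 8 samples, while only holding one micro-batch's activations in memory at a time.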

eternaldolphin commented 2 months ago

> Hi! Has anyone reproduced the results?
>
> I tried to reproduce it, but was limited by GPU memory (CORA may need 8 × 80G). My result on OV-COCO novel classes with RN50 is slightly lower than 35.1 (about 34.3). You are welcome to discuss further via my WeChat: Maytherebe.
>
> That's wonderful! May I ask whether you could kindly open-source your implementation on GitHub so that others can learn from it and reproduce the results of CORA? I hope I'm not being rude. Thanks.

Sorry for missing the information; you can refer to https://github.com/eternaldolphin/cora-dev. You are also welcome to use LaMI-DETR as a baseline, which can train OV-LVIS in one day on 8 × 40G A100s or in two days on 8 × 32G V100s.