junjiehe96 / FastInst

[CVPR2023] FastInst: A Simple Query-Based Model for Real-Time Instance Segmentation
MIT License
178 stars · 16 forks

GPU computational resources quantity #6

Closed danhntd closed 1 year ago

danhntd commented 1 year ago

Dear Authors, Thank you for your very interesting work and source code.

Could you please confirm the number of GPUs used in the training process? Is it 1x V100 or 4x V100? The paper indicates that 1x A100 GPU is used for evaluation and inference, but the training script in the source code is pre-configured for 4 GPUs.

Many thanks in advance.

junjiehe96 commented 1 year ago

Thank you for your interest in our work. We used 4x V100 GPUs for model training and 1x V100 for model inference speed evaluation.
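For readers adjusting the GPU count themselves: FastInst builds on detectron2, whose launcher exposes a `--num-gpus` flag. The config file path below is illustrative, not taken from this thread; check the repository's README for the exact config names. A minimal sketch:

```shell
# Train with 4 GPUs (the setup the authors report), assuming a
# detectron2-style train_net.py entry point; the config path is
# hypothetical and should be replaced with one from the repo.
python train_net.py \
  --num-gpus 4 \
  --config-file configs/coco/instance-segmentation/fastinst_R50_ppm-fpn_x1_640.yaml

# Single-GPU inference/evaluation from a trained checkpoint
# (as used for the reported speed measurements):
python train_net.py \
  --num-gpus 1 \
  --eval-only \
  --config-file configs/coco/instance-segmentation/fastinst_R50_ppm-fpn_x1_640.yaml \
  MODEL.WEIGHTS /path/to/checkpoint.pth
```

Note that when changing `--num-gpus`, the effective batch size changes accordingly, so the learning rate schedule in the config may also need adjustment.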