serdarch / SERNet-Former

[CVPR 2024 Workshops] SERNet-Former: Semantic Segmentation by Efficient Residual Network with Attention-Boosting Gates and Attention-Fusion Networks

How fast is the model’s inference speed? #4

Open lxy-mini opened 3 weeks ago

serdarch commented 3 weeks ago

It depends on the hardware. On a CPU, training takes a little over a minute per Cityscapes image, and inference takes about 20 seconds per image. Both are much faster on a GPU.

Because the model is developed on top of existing baselines, it runs much faster than when compiled from scratch. I am sharing a tutorial example that uses the publicly available DeepLabV3+; when built on existing networks, our model has similar training and inference times even though it is larger than those baseline models.

Please try the Colab example; it should give you a feel for the inference time and speed.

lxy-mini commented 3 weeks ago

Thank you very much for your reply. I'm looking for a way to segment road lane markings that balances speed and segmentation quality. I think your method would be excellent in terms of quality, but inference speed is a practical concern.

serdarch commented 1 day ago

Reducing the size and resolution of frames can help.
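One common way to apply this advice is to downscale each frame before inference and upsample the predicted logits back to the original resolution, trading some accuracy for speed. Below is a minimal sketch of that pattern; the `segment_downscaled` helper and the toy 1x1-conv "model" are illustrative stand-ins, not part of the SERNet-Former codebase:

```python
import torch
import torch.nn.functional as F

def segment_downscaled(model, frame, scale=0.5):
    """Run segmentation on a downscaled frame, then upsample the logits
    back to the original resolution (a speed/accuracy trade-off)."""
    h, w = frame.shape[-2:]
    small = F.interpolate(frame, scale_factor=scale, mode="bilinear",
                          align_corners=False)
    with torch.no_grad():
        logits = model(small)  # model runs on ~scale**2 of the pixels
    return F.interpolate(logits, size=(h, w), mode="bilinear",
                         align_corners=False)

# Toy stand-in model: a 1x1 conv producing 2 classes (lane / background).
model = torch.nn.Conv2d(3, 2, kernel_size=1).eval()
frame = torch.randn(1, 3, 512, 1024)
out = segment_downscaled(model, frame, scale=0.5)
print(tuple(out.shape))  # logits restored to the input resolution
```

At `scale=0.5` the network processes only a quarter of the pixels, which is usually a substantial speedup; thin structures like lane markings may suffer at aggressive scales, so the factor is worth tuning.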
