I'm a little confused. As far as I know, all input images are initially resized to the same dimensions (1024×1024, per utils.py) before being processed by the framework. So why does the processing time differ significantly for different image sizes? For example, I fed in the same image at three different resolutions (1920×1080, 960×540, and 480×270), and the processing times were 1.42s, 0.95s, and 0.57s, respectively. Why does the time vary so much? Aren't all images resized to the same dimensions first?
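One way to narrow this down might be to time each preprocessing stage separately, since image decoding and the resize interpolation itself both scale with the *source* pixel count even when the output is a fixed 1024×1024. I don't know the internals of utils.py, so this is just a generic sketch (the nearest-neighbour resize here is a stand-in, not the framework's actual interpolation) using synthetic arrays in place of decoded images:

```python
import time
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize to (out_h, out_w) via NumPy fancy indexing.

    Stand-in for the framework's real resize; used only to separate
    per-stage timings.
    """
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h  # source row for each output row
    cols = np.arange(out_w) * w // out_w  # source column for each output column
    return img[rows[:, None], cols]

for h, w in [(1080, 1920), (540, 960), (270, 480)]:
    t0 = time.perf_counter()
    # Synthetic array standing in for the decoded input image; real JPEG/PNG
    # decoding time also grows with the source resolution.
    img = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
    t1 = time.perf_counter()
    resized = resize_nearest(img, 1024, 1024)
    t2 = time.perf_counter()
    print(f"{w}x{h}: load {t1 - t0:.4f}s, resize {t2 - t1:.4f}s, "
          f"output {resized.shape}")
```

If the per-stage numbers show the gap comes from loading/resizing rather than the model forward pass, that would explain why identically sized 1024×1024 inputs still take different total times.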