Closed: Ody-trek closed this issue 4 days ago
In the original AdhocImageDataset, there is a _preprocess function that handles image resizing. You can set the shape parameter (stored as self.shape) to define the target dimensions for the images; every image in the dataset is then resized to those dimensions inside _preprocess, so all samples end up with the size you require.
For example:
inference_dataset = AdhocImageDataset(
    [os.path.join(input_dir, img_name) for img_name in image_names],
    shape=args.shape,
)
If you’re working with a personal dataset format, you could implement a similar function to handle resizing: adapt the _preprocess logic to the structure of your dataset so that image dimensions stay consistent throughout your workflow; a minimal sketch follows below.
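For reference, here is a minimal sketch of such a dataset, assuming OpenCV for loading and a fixed (height, width) target. The class name MyImageDataset and the exact preprocessing steps are illustrative, not the repository's implementation:

import cv2
import torch
from torch.utils.data import Dataset

class MyImageDataset(Dataset):
    def __init__(self, image_paths, shape=(1024, 768)):
        self.image_paths = image_paths
        self.shape = shape  # target (height, width) for every sample

    def __len__(self):
        return len(self.image_paths)

    def _preprocess(self, img):
        # Resize so every sample has identical dimensions and can be batched.
        img = cv2.resize(img, (self.shape[1], self.shape[0]),
                         interpolation=cv2.INTER_LINEAR)
        return torch.from_numpy(img).permute(2, 0, 1).float()  # HWC -> CHW

    def __getitem__(self, idx):
        path = self.image_paths[idx]
        img = cv2.imread(path)  # BGR image of arbitrary size
        return path, self._preprocess(img)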
@FrankWuuu thank you for answering the question.
@Ody-trek yes, this is by design. We prioritized batch processing over variable-size image processing. Setting the batch size to 1 in your case will resolve the issue.
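(For anyone finding this later: with a batch of one, the default collate step never has to stack tensors of different spatial sizes, which is what triggers the error. Below is a minimal sketch using a plain DataLoader and the inference_dataset from the earlier snippet; the place where the demo script actually sets its batch size may differ, so check pose_keypoints17.sh.)

from torch.utils.data import DataLoader

# inference_dataset is the AdhocImageDataset constructed in the earlier snippet.
loader = DataLoader(inference_dataset, batch_size=1, shuffle=False)
for batch in loader:
    # Each batch holds exactly one image, so tensors with different H/W are
    # never stacked together; run the TorchScript model on this batch as usual.
    ...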
Hello,
I encountered a problem while running the lite/scripts/demo/torchscript/pose_keypoints17.sh script. When I use a folder containing images of different sizes as input, I get the following runtime error during inference:

RuntimeError: Trying to resize storage that is not resizable
However, when I use a folder containing only one image, or images that are all the same size, the inference runs successfully.

My question: how should I handle input folders containing images of different sizes for inference?
Error details: The complete error message is as follows:
Thank you in advance for your help!