ayushjain-ow opened this issue 1 year ago
hi @ayushjain-ow ,

I think you're right! Would you like to make a pull request to fix this issue? I think we can solve it in two ways.

First, we can remove `LoadImageFromFile` and `LoadMask` from the `infer_pipeline_cfg` if the types of `img` and `mask` are `np.ndarray`. In a word, we can check their types to set the `infer_pipeline_cfg` accordingly.
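A minimal sketch of that type check, assuming the mmagic convention that a pipeline config is a list of `dict(type=...)` entries; the helper name `adapt_infer_pipeline` is hypothetical, not part of the library:

```python
import numpy as np

def adapt_infer_pipeline(infer_pipeline_cfg, img, mask):
    """Hypothetical helper: drop the file-loading transforms when the
    inputs are already in-memory arrays, otherwise keep the pipeline
    unchanged. Assumes each entry is a dict with a 'type' key, as in
    mmagic configs."""
    if isinstance(img, np.ndarray) and isinstance(mask, np.ndarray):
        skip = {'LoadImageFromFile', 'LoadMask'}
        return [t for t in infer_pipeline_cfg if t.get('type') not in skip]
    return list(infer_pipeline_cfg)

pipeline = [
    dict(type='LoadImageFromFile', key='gt'),
    dict(type='LoadMask'),
    dict(type='GetMaskedImage'),
]
img = np.zeros((664, 1000, 3), dtype=np.uint8)
mask = np.zeros((664, 1000), dtype=np.uint8)
print([t['type'] for t in adapt_infer_pipeline(pipeline, img, mask)])
# ['GetMaskedImage']
```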
Second, when you input an image by filename, you need to make sure the input image size is divisible by 4. This is because `global_local` downsamples the input image by strided convolution and then upsamples it by 4. Your example image has shape (662, 1000, 3), therefore the output height will be ceil(662/4)*4 = 664. I think we need to add an assertion before inference to remind users of this issue.

Looking forward to your pull request!
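Such a pre-inference assertion could look like this (a sketch; the function name and message wording are assumptions, only the divisible-by-4 requirement comes from the reply above):

```python
import math

def assert_size_divisible(img_shape, factor=4):
    """Hypothetical pre-inference check: global_local's strided
    downsample followed by a x4 upsample only round-trips sizes
    that are divisible by 4."""
    h, w = img_shape[:2]
    if h % factor or w % factor:
        out = (math.ceil(h / factor) * factor, math.ceil(w / factor) * factor)
        raise AssertionError(
            f'input size ({h}, {w}) is not divisible by {factor}; '
            f'the inpainting output would come back as {out}')

# the reported image has shape (662, 1000, 3): 662 is not divisible
# by 4, so the output height becomes ceil(662/4)*4 = 664
try:
    assert_size_divisible((662, 1000, 3))
except AssertionError as e:
    print(e)
```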
Prerequisite
Task
I'm using the official example scripts/configs for the officially supported tasks/models/datasets.
Branch
main branch https://github.com/open-mmlab/mmagic
Environment
sys.platform: linux
Python: 3.10.12 (main, Jun 7 2023, 12:45:35) [GCC 9.4.0]
CUDA available: False
numpy_random_seed: 2022
GCC: x86_64-linux-gnu-gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
PyTorch: 2.0.1+cu118
PyTorch compiling details: PyTorch built with:
TorchVision: 0.15.2+cu118
OpenCV: 4.7.0
MMEngine: 0.8.2
MMCV: 2.0.1
MMCV Compiler: GCC 9.3
MMCV CUDA Compiler: 11.8
MMagic: 1.0.1+
Reproduces the problem - code sample
Sample Colab Notebook - MMagic-Issue.ipynb
Using the `global_local` model for image inpainting. Also tried with `partial_conv`.

Reference: `MMagicInferencer.infer`

As per the method definition and this, `np.ndarray` should have been supported. I guess this is due to this piece of code: `mmagic.apis.inferencers.inpainting_inferencer.InpaintingInferencer.preprocess`.

Reproduces the problem - command or script
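A minimal sketch of why array inputs can break when the preprocessing pipeline still contains a file-loading step; `LoadImageFromFile` here is a hypothetical stand-in for the mmagic transform, not the real implementation:

```python
import numpy as np

class LoadImageFromFile:
    """Stand-in for a file-loading transform: it expects a path
    string in results['img'], so an already-loaded array fails."""
    def __call__(self, results):
        path = results['img']
        if not isinstance(path, str):
            raise TypeError(
                f'expected a file path, got {type(path).__name__}')
        # pretend we read the file from disk
        results['img'] = np.zeros((4, 4, 3), dtype=np.uint8)
        return results

pipeline = [LoadImageFromFile()]
results = {'img': np.zeros((4, 4, 3), dtype=np.uint8)}  # ndarray, not a path
try:
    for transform in pipeline:
        results = transform(results)
except TypeError as e:
    print('fails as reported:', e)
```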
NIL
Reproduces the problem - error message
Additional information
I also tried saving the `img` and `mask` to the local filesystem and passing the file paths to the `.infer` function, but still encountered an error.

Code to reproduce:
Exception traceback
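Assuming the size constraint described in the maintainer's reply is the cause of the file-path failure, one workaround is to pad the inputs up to a multiple of 4 before inference; `pad_to_multiple` is a hypothetical helper, not a mmagic API:

```python
import numpy as np

def pad_to_multiple(img, multiple=4):
    """Edge-pad H and W up to the next multiple of `multiple`, so a
    strided downsample followed by a x4 upsample preserves the size."""
    h, w = img.shape[:2]
    ph = (-h) % multiple
    pw = (-w) % multiple
    pad = [(0, ph), (0, pw)] + [(0, 0)] * (img.ndim - 2)
    return np.pad(img, pad, mode='edge')

# the reported image shape (662, 1000, 3) becomes (664, 1000, 3)
img = np.zeros((662, 1000, 3), dtype=np.uint8)
print(pad_to_multiple(img).shape)  # (664, 1000, 3)
```

The padding can be cropped off the inpainted result afterwards to recover the original 662-pixel height.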