lhaippp / RecDiffusion

[CVPR2024]: RecDiffusion: Rectangling for Image Stitching with Diffusion Models
Apache License 2.0

How to generate our own datasets for training and inference stage? #11

Open zhihaoyi opened 2 days ago

zhihaoyi commented 2 days ago

Hi! Great work by your team!

I am wondering how I am supposed to generate my own dataset, i.e., go from two individual images to a rectangular stitched image with its mask M_s?

lhaippp commented 2 days ago

Hi,

for the image stitching step, you could try UDIS2; I think you can obtain a corresponding M_s from it as well. BTW, if you already have a stitched image without M_s, you could consider using the function define_mask_zero_borders to generate one
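For readers following along: the idea behind a function like `define_mask_zero_borders` is simply to mark the black (all-zero) border pixels of a stitched image as invalid and everything else as valid content. The actual implementation referenced above may differ; below is a minimal illustrative sketch (the function name `mask_from_zero_borders` and the `threshold` parameter are my own, not from the repo):

```python
import numpy as np

def mask_from_zero_borders(image, threshold=0):
    """Illustrative sketch (not the repo's implementation): build a binary
    validity mask for a stitched image whose empty regions are black.

    image: H x W x C uint8 array (stitched image with black borders).
    Returns an H x W uint8 mask (1 = stitched content, 0 = empty border).
    """
    # A pixel counts as "border" if all of its channels are (near) zero,
    # so we keep pixels whose channel-wise maximum exceeds the threshold.
    valid = (image.max(axis=-1) > threshold).astype(np.uint8)
    return valid

# Toy example: a 4x4 image whose left column is an empty black border.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:, 1:] = 255
mask = mask_from_zero_borders(img)
print(mask)  # left column is 0, the rest is 1
```

A real implementation may be more careful, e.g. only treating zero regions connected to the image edge as border so that genuinely black pixels inside the scene are not masked out.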

zhihaoyi commented 23 hours ago

Thank you for your information! I will definitely try it out!

Besides that, I am also wondering how the training dataset is collected. In the training dataset we have rectangular images as GT and non-rectangular stitched images as inputs, but I am confused about how those pairs are obtained. Do we generate the non-rectangular stitched images from the GT using manually crafted motion fields?

lhaippp commented 21 hours ago

Hi, you could refer to Deep Rectangling for Image Stitching: A Learning Baseline for more details about creating the dataset.