ShuangMa156 opened 1 week ago
Yes, these sharp images are used to synthesize the blurry frames. Specifically, we take 7 sharp images from the original REDS dataset and interpolate 7 new frames between each pair of consecutive sharp images with RIFE. The interpolated sequence contains 49 frames in total, and we average them to generate one blurry frame. Thus, each blurry frame (the normal-blur ones) corresponds to 7 sharp images from the original REDS dataset. You can find the blurry-sharp correspondence by comparing timestamps in the npz files. For example, the sharp images used to generate the first blurry frame are those satisfying data['exp_start1'] <= data['sharp_timestamps'] <= data['exp_end1'] (the specific data format is explained here).
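A minimal sketch of that timestamp lookup. The key names exp_start1, exp_end1, and sharp_timestamps come from the reply above; the numeric values below are made up for illustration and stand in for an actual np.load of one of the dataset's .npz files.

```python
import numpy as np

def sharp_frames_in_exposure(data, start_key="exp_start1", end_key="exp_end1"):
    """Return indices of sharp frames whose timestamps fall inside the
    exposure window [start, end] of one blurry frame."""
    ts = np.asarray(data["sharp_timestamps"])
    mask = (data[start_key] <= ts) & (ts <= data[end_key])
    return np.nonzero(mask)[0]

# Simulated data standing in for np.load("<sequence>.npz") (illustrative values):
data = {
    "sharp_timestamps": np.arange(0, 100, 10),  # 10 sharp frames at t = 0..90
    "exp_start1": 0,
    "exp_end1": 60,
}
print(sharp_frames_in_exposure(data))  # -> [0 1 2 3 4 5 6], i.e. 7 sharp frames
```

With the real npz files, the mask should select exactly the 7 original sharp images per blurry frame, matching the correspondence described above.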
Thank you for your reply; I now understand the whole process of synthetic blurry image generation.
Hello, I have a question about the blurry images in the synthetic Ev-REDS dataset. In the paper, you wrote: "For each sequence, we generate high frame-rate videos by interpolating 7 images between consecutive frames using RIFE [11], and then synthesize blurry frames by averaging 49 sharp images of the high frame-rate videos." Are the sharp images from the .npz files in the Ev-REDS dataset used to generate the blurry images?
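The quoted synthesis pipeline (7 sharp frames, 7 interpolated frames between each consecutive pair, then averaging all 49) can be sketched as follows. Note this is only an illustration of the frame counts and the averaging step: it substitutes simple linear interpolation for RIFE, and the function name is hypothetical.

```python
import numpy as np

def synthesize_blur(sharp):
    """Sketch of the described blur synthesis.
    sharp: array of shape (7, H, W) holding 7 consecutive sharp frames.
    Returns one blurry frame as the mean of 49 high-frame-rate frames."""
    frames = [sharp[0]]
    for a, b in zip(sharp[:-1], sharp[1:]):
        # 7 in-between frames per gap; the paper uses RIFE here,
        # linear interpolation is just a stand-in for the sketch.
        for t in np.linspace(0.0, 1.0, 9)[1:-1]:
            frames.append((1 - t) * a + t * b)
        frames.append(b)
    assert len(frames) == 49  # 7 originals + 6 gaps * 7 interpolated
    return np.mean(frames, axis=0)
```

With 6 gaps between 7 sharp frames and 7 interpolated frames per gap, the high-frame-rate sequence indeed has 7 + 6 * 7 = 49 frames, matching the paper's description.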