uzh-rpg / rpg_vid2e

Open source implementation of CVPR 2020 "Video to Events: Recycling Video Dataset for Event Cameras"
GNU General Public License v3.0

Save upsampled images in video format #8

Open etienne87 opened 4 years ago

etienne87 commented 4 years ago

"Why store the upsampling result in images: Images support random access from a dataloader. A video file, for example, can typically only be accessed sequentially when we try to avoid loading the whole video into RAM."

"Same sequence can be accessed by multiple processes (e.g. PyTorch num_workers > 1). Well established C++ interface to load images. This is useful to generate events on the fly (needed for contrast threshold randomization) in C++ code without loading data in Python first. If there is a need to store the resulting sequences in a different format, raise an issue (feature request) on this GitHub repository."
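The random-access argument quoted above can be sketched in a few lines: a directory of individually numbered frames lets a dataloader fetch any index directly, with no sequential decoding. This is an illustrative stand-in, not the repo's actual loader; the file layout and class name are invented, and a real loader would decode the image bytes (e.g. with OpenCV or PIL) instead of returning them raw:

```python
import os
import tempfile

class FrameFolder:
    """Random-access view over a directory of numbered frame files.

    Each frame is a separate file, so __getitem__(i) touches only one
    file -- no sequential decoding, and safe to share across worker
    processes (e.g. PyTorch num_workers > 1).
    """
    def __init__(self, folder):
        self.paths = sorted(
            os.path.join(folder, f) for f in os.listdir(folder)
        )

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        # A real loader would decode image bytes here (cv2.imread / PIL).
        with open(self.paths[idx], "rb") as f:
            return f.read()

# Demo with fake one-byte "frames" in a temporary directory.
tmp = tempfile.mkdtemp()
for i in range(5):
    with open(os.path.join(tmp, f"{i:06d}.png"), "wb") as f:
        f.write(bytes([i]))

frames = FrameFolder(tmp)
```

By contrast, fetching frame 3 of a compressed video generally requires decoding from the previous keyframe forward, which is what makes per-frame image files attractive for shuffled dataloading.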

Why would you need to load the whole video at once into RAM? I think storing your upsampled video in .mp4/.avi would be a huge space saver! For online generation of events, you can use a PyTorch `IterableDataset` (see example here: https://github.com/etienne87/pytorch-streamloader)

Besides, is online generation even possible right now? When loading a folder of images, Python does not control the callback, so you effectively load all events into RAM to be accessed on the Python side via a numpy array, no?
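The streaming access pattern suggested here can be sketched without any video library. This is not the linked pytorch-streamloader code, just a minimal generator-based sketch of the same idea an `IterableDataset` expresses: frames are produced one at a time and consumed lazily, so memory stays constant regardless of sequence length (the frame/timestamp values are placeholders):

```python
def stream_frames(num_frames):
    """Stand-in for a sequential video decoder: yields one
    (frame, timestamp) pair at a time instead of materializing
    the whole sequence in RAM."""
    for i in range(num_frames):
        yield i, 0.01 * i  # placeholder frame and timestamp

def events_online(frames):
    """Consume the stream lazily -- analogous to iterating a PyTorch
    IterableDataset: only the previous frame (plus state) is held."""
    prev = None
    for frame, t in frames:
        if prev is not None:
            # This is where events between prev and frame would be
            # generated and yielded downstream.
            yield (prev, frame, t)
        prev = frame

pairs = list(events_online(stream_frames(4)))
```

The trade-off matches the discussion above: sequential-only access, but no need to hold the full sequence (or all events) in RAM at once.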

magehrig commented 4 years ago

In hindsight, it would have been smarter to save the upsampled frames in a compressed, lossless video format to save some space. You can then extract the frames with an ffmpeg one-liner, if required.

If you do not require random access to the sequence (e.g. when using `IterableDataset` in PyTorch), then of course it makes more sense to keep it as a video file.

Given the upsampled images/video, you can use the Python bindings to generate events online. In that case, you would pass a sequence of stamped images to the event generator via the function generateFromStampedImageSequence.
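generateFromStampedImageSequence is part of the repo's C++/Python bindings and its exact signature is not shown in this thread, but the underlying event-camera model can be sketched in numpy. The sketch below is an illustrative simplification (one event per pixel per frame pair, no timestamp interpolation or refractory period, all names invented): an event fires whenever the per-pixel log intensity drifts from its reference value by more than the contrast threshold.

```python
import numpy as np

def events_from_stamped_images(images, timestamps, ct=0.2):
    """Illustrative ESIM-style event generation (NOT the repo's actual
    binding): emit an event whenever a pixel's log intensity moves by
    at least the contrast threshold `ct` relative to its reference.

    images: list of float arrays with values in (0, 1]
    timestamps: list of floats, one per image
    Returns a list of (t, y, x, polarity) tuples.
    """
    ref = np.log(images[0] + 1e-6)       # per-pixel reference log intensity
    events = []
    for img, t in zip(images[1:], timestamps[1:]):
        logi = np.log(img + 1e-6)
        diff = logi - ref
        # Pixels whose log-intensity change exceeds +/- ct fire an event.
        ys, xs = np.nonzero(np.abs(diff) >= ct)
        for y, x in zip(ys, xs):
            pol = 1 if diff[y, x] > 0 else -1
            events.append((t, y, x, pol))
            ref[y, x] = logi[y, x]       # update reference after firing
    return events

# Tiny demo: one pixel brightens enough to trigger a positive event.
a = np.full((2, 2), 0.5)
b = a.copy()
b[0, 0] = 0.9
evs = events_from_stamped_images([a, b], [0.0, 0.01])
```

The real simulator also interpolates event timestamps between frames and supports contrast-threshold randomization, which is exactly why on-the-fly generation from densely upsampled frames is needed.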

etienne87 commented 4 years ago

Ok, thanks for the tip, I will try generateFromStampedImageSequence. Actually, I might be wrong: storing in video format (unless uncompressed) could introduce block-matching artifacts (mp4v) or changes at keyframes (h264), so videos can only be compressed so much?

magehrig commented 4 years ago

Yes, you have to encode in a lossless manner. You can still use lossless compression to decrease the overall file size, but I would not use lossy compression, which introduces artifacts as you mentioned.
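The lossless-vs-lossy distinction being made here is easy to demonstrate with the standard library, using zlib as a stand-in for a lossless video codec (FFV1, or x264 at crf 0): the compressed data is smaller, yet decompression reconstructs every byte exactly, so no block or keyframe artifacts can appear. The "frame" below is synthetic:

```python
import zlib

# A fake 8-bit 64x64 "frame" with a smooth gradient (compresses well,
# much like the smooth regions of upsampled video frames).
frame = bytes((x + y) % 256 for y in range(64) for x in range(64))

packed = zlib.compress(frame, level=9)   # lossless compression
restored = zlib.decompress(packed)

# Smaller on disk, yet bit-exact after decoding -- the property a
# lossy codec (mp4v, default h264) does not give you.
assert restored == frame
assert len(packed) < len(frame)
```

With a lossy codec the equivalent round trip would return a frame that is only approximately equal, and those small per-pixel errors matter here because event generation thresholds log-intensity differences.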

danielgehrig18 commented 3 years ago

@etienne87 Have you solved this problem? If yes, consider making a PR.

etienne87 commented 3 years ago

Hi @danielgehrig18, I have made a sort of rip-off of Super-SloMo where I use scikit-video with a "crf" compression-factor option to save the videos: https://github.com/prophesee-ai/Super-SloMo/blob/main/slowmo/async_slomo.py. The best is to use crf=0, I think.
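The scikit-video route mentioned here can be sketched as follows. This is hypothetical glue code, not the linked async_slomo.py: the function names and frame source are invented, and the writer call requires scikit-video plus an ffmpeg binary. The `outputdict` keys are plain ffmpeg flags, with `-crf 0` requesting lossless H.264 as discussed above:

```python
def lossless_x264_opts(crf=0):
    """ffmpeg options for scikit-video's FFmpegWriter.

    crf=0 requests lossless H.264; small nonzero values trade a little
    fidelity for much smaller files. Note that chroma subsampling in
    the default pixel format can still discard color information, so
    for strictly lossless color you would also pin the pixel format.
    """
    return {"-c:v": "libx264", "-crf": str(crf), "-preset": "veryslow"}

def write_video(frames, path="out.mp4"):
    """Hypothetical writer: needs scikit-video + ffmpeg installed."""
    import skvideo.io
    writer = skvideo.io.FFmpegWriter(path, outputdict=lossless_x264_opts())
    for frame in frames:          # frames: iterable of HxWx3 uint8 arrays
        writer.writeFrame(frame)
    writer.close()

opts = lossless_x264_opts()
```

At crf=0 the space savings over raw images come purely from lossless intra/inter prediction, which is consistent with the earlier point that lossless video still compresses meaningfully better than a folder of PNGs for smooth upsampled frames.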

etienne87 commented 3 years ago

By the way, I am pushing to release an open-source simulator as well (with acceleration done in CUDA, similar to what you do with esim_torch). Maybe we can merge everything in the end? Features would be:

danielgehrig18 commented 3 years ago

Very cool! Yes, it would be great if we could merge these works in the end. Let me know when you are ready to test, and then we can provide a more realistic model as a kernel.