Closed: semel1 closed this issue 2 years ago.

Is it possible to generate a temporally consistent video from a depth-map sequence obtained by monocular depth estimation (MiDaS), similar to the "Blind Video Temporal Consistency" project (https://github.com/lulu1315/BlindConsistency or https://github.com/nbonneel/blindconsistency)?

Yes, it is possible.

Thanks for your kind reply. Is using "model colorization" with 50 epochs the proper way to go? (input: the original color video; processed: the depth-estimation output)

You can try 'model colorization' or 'model dehazing'; IRT may not be necessary for your task.

Great, thanks again. I'll try it as soon as I get my GPU (currently I'm testing on CPU only, but it is far too slow).
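As an aside, a quick way to sanity-check whether a processed depth sequence really is smoother than the raw per-frame estimates is a crude warp-free flicker metric. This is only an illustrative sketch: the function name and toy data are my own, not part of any repository mentioned above, and proper benchmarks (e.g. the Blind Video Temporal Consistency evaluations) warp neighbouring frames with optical flow before differencing rather than comparing frames directly.

```python
import numpy as np

def flicker_score(frames):
    """Mean absolute change between consecutive frames.

    A rough, warp-free proxy for temporal flicker: lower means
    smoother. Ignores camera/object motion, so it only makes sense
    for comparing two versions of the same sequence.
    """
    frames = np.asarray(frames, dtype=np.float64)
    diffs = np.abs(np.diff(frames, axis=0))  # frame-to-frame deltas
    return diffs.mean()

# Two toy 3-frame "depth" sequences: one static, one flickering.
static = [np.full((4, 4), 0.5)] * 3
flicker = [np.full((4, 4), v) for v in (0.0, 1.0, 0.0)]

print(flicker_score(static))   # → 0.0
print(flicker_score(flicker))  # → 1.0
```

Comparing the score of the raw MiDaS output against the temporally filtered result gives a rough, flow-free indication of how much flicker was removed.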