Closed jbgrant01 closed 8 years ago
200 seconds per frame sounds reasonable. We are basically applying Justin Johnson's neural-style code to each frame, with an additional temporal consistency constraint. If you want to trade quality for performance, you could play around with the -tol_loss_relative switch: setting it to a higher value will stop the optimization process earlier.
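As a rough illustration (this is not the repository's actual Torch code, and the names and defaults here are hypothetical), a relative-loss stopping rule of the kind -tol_loss_relative controls might look like:

```python
def should_stop(loss_history, tol=1e-4, interval=50):
    """Stop when the loss has improved by less than `tol`, relative to
    its value `interval` iterations ago. Illustrative sketch only;
    the real switch lives in the Torch optimization loop."""
    if len(loss_history) <= interval:
        return False
    prev = loss_history[-interval - 1]
    cur = loss_history[-1]
    # Relative improvement over the last `interval` iterations.
    return abs(prev - cur) / max(abs(prev), 1e-12) < tol
```

A larger `tol` means the loop accepts a coarser plateau and quits sooner, which is exactly the quality-for-speed trade mentioned above.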
One could try to apply this temporal consistency constraint to Chuan Li's MRF code, but that looks like a non-trivial task to me, because both Chuan Li and I have rewritten Justin Johnson's source code quite a lot.
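For context, the temporal consistency term is essentially a penalty on the difference between the current stylized frame and the previous stylized frame warped into the current frame along the optical flow, masked where the flow is unreliable (occlusions, motion boundaries). A minimal NumPy sketch, with the warp and mask assumed to be computed elsewhere:

```python
import numpy as np

def temporal_loss(current, warped_prev, mask):
    """Masked mean squared difference between the current stylized frame
    and the previous stylized frame warped into this frame's coordinates.
    `current`, `warped_prev`: H x W x 3 arrays; `mask`: H x W array that
    is 1 where the flow is reliable and 0 elsewhere, so disoccluded
    regions are free to change. Sketch only; the repository implements
    this as a Torch loss module inside the optimizer."""
    diff = (current - warped_prev) ** 2
    return float(np.sum(mask[..., None] * diff) / current.size)
```

Porting the constraint to the MRF code would mean adding a term like this to its objective and feeding it the flow-warped previous output each frame.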
I don't have the same expertise as you, Chuan Li, or Justin Johnson, but I may take a shot at it. In the little time I have played with the CNNs, Chuan's code seems to run a bit faster than Justin's. It might speed things up and still produce pleasing videos.
First off, this is incredible work. Your videos are amazing. I recognize that I don't have the best GPU. Since it started writing output files, it has been taking about 200 seconds per frame. At this rate it will take a little over 25 hours to render an 18-second clip of 450 frames. Does that sound reasonable?
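The arithmetic behind that estimate is straightforward:

```python
seconds_per_frame = 200
frames = 450            # 18 seconds at 25 fps
total_seconds = seconds_per_frame * frames
print(total_seconds / 3600)  # 25.0 hours
```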
As a hobbyist I have been exploring Justin Johnson's neural-style as well as Chuan Li's convolutional neural network with Markov Random Fields (cnnmrf). I find it easier to get nice results with the latter. Are you aware of any branch of your programs that uses cnnmrf?
I know it would not be a trivial endeavor to modify your programs. If I were to attempt it, could you point me to the files and functions I would need to look at?
Thank you for your time.