Open NameRX opened 7 years ago
killing it. thanks!
Hi! First off, you've done a great job: this lib plus these improvements make it my favorite library for video style transfer (excellent results and amazing performance).
I'm having an issue with high-resolution images, and I think it comes from this PR's changes (I've cloned @NameRX's branch).
With a 1280x720 video and a style image of the same size, the script fails at the first iteration (2/2000) with no error message.
...
Setting up temporal consistency.
Setting up style layer 2 : relu1_1
Setting up style layer 7 : relu2_1
Setting up style layer 12 : relu3_1
Setting up style layer 21 : relu4_1
Setting up content layer 23 : relu4_2
Setting up style layer 30 : relu5_1
Detected 105 content images.
Running optimization with L-BFGS
<optim.lbfgs> creating recyclable direction/step/history buffers
<optim.lbfgs> function value changing less than tolX
Running time: 2s
Iteration 2 / 2000
Content 1 loss: 0.000000
Style 1 loss: 515876.342773
Style 2 loss: 91595007.812500
Style 3 loss: 52583910.156250
Style 4 loss: 1642812250.000000
Style 5 loss: 125239.051819
Total loss: 1787632283.363342
...
It renders an out-0001.png file which is clean but has no style transfer; it's identical to the source. I've tried the adam optimizer and it seems to fix the issue, but the result is not as good. Also, the problem does not happen with the main branch of the manuelruder repo.
I have the latest CUDA and cuDNN correctly installed on Ubuntu 16.04 64-bit, a Titan X, 30 GB of RAM, and an i7 @ 3.40 GHz x12, so I don't think handling this resolution should be a problem.
Let me know if you think of anything obvious; I'll be happy to help. I'll keep digging and let you know if I find something interesting. My guess is that something goes wrong around the waitForFile functions when one of the scripts needs more time to initialize.
FYI, removing the `init` argument in stylizeVideo.sh makes it work again.
@martync I had the same bug; try decreasing the style weight to lower values, that may help. This is not a waitForFile bug. I think a combination of input parameters produces this error.
@martync I checked the original code and my branch with the same input arguments, and this bug appeared in both cases. That means it is not a bug I introduced; people have had the same problem with the neural-style code.
Thanks for your reply. Good to know it's not a bug shipped with your PR. I will try changing the style weight.
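For reference, lowering the style weight on the command line might look like this; `-style_weight` is the flag name used in neural-style-derived scripts, and the value below is purely illustrative, not a recommendation:

```shell
# Illustrative only: a lower style weight for high-resolution frames.
# The exact value needs experimenting per video.
STYLE_WEIGHT=50
echo "th artistic_video.lua -style_weight $STYLE_WEIGHT"
```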
Your code passes the ffmpeg flag `-framerate` to avconv, but avconv requires `-r`, not `-framerate`. When avconv is used, it spits out the error `Option framerate not found`.
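A minimal sketch of a possible workaround, assuming the script checks which encoder binary is installed before choosing the flag (variable names here are made up; the flag facts are as reported above):

```shell
# Pick the frame-rate flag based on the available encoder binary:
# ffmpeg accepts -framerate, while avconv only understands -r.
if command -v ffmpeg >/dev/null 2>&1; then
  ENCODER=ffmpeg
  RATE_FLAG=-framerate
else
  ENCODER=avconv
  RATE_FLAG=-r
fi
echo "$ENCODER $RATE_FLAG 30 ..."
```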
@NameRX You mean that decreasing the style weight makes it possible to process larger video sizes?
@martync Why is the image out-0001.png left unchanged?
@linrio I have no clue, that was the issue
Hey, great work on this branch, loving it! One question: is it possible to queue multiple stylizeVideo commands, so that multiple video files are processed automatically in succession?
@Vassay I'm currently working on that, but I don't think it should be part of this code. My approach is to use a lockfile and launch a new instance once the lockfile is free. I'm doing it in Python, which is more familiar to me; if you're interested, I can share what I did.
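The comment above describes a Python implementation; here is a rough shell sketch of the same lockfile idea (the lock path and function name are made up, not from the actual code):

```shell
LOCKFILE=/tmp/stylize.lock   # hypothetical lock path

run_job() {
    while [ -e "$LOCKFILE" ]; do sleep 5; done  # wait until the lock is free
    touch "$LOCKFILE"                           # acquire the lock
    "$@"                                        # run the queued command
    rm -f "$LOCKFILE"                           # release the lock
}

# Usage would be, e.g.:
# run_job ./stylizeVideo.sh 27.mov params_27.txt
# run_job ./stylizeVideo.sh 22.mov params_22.txt
```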
@martync I ended up using the quick-and-dirty method of putting || between the commands, and it seems to work for me. Like this:
./stylizeVideo.sh 27.mov params_27.txt || ./stylizeVideo.sh 22.mov params_22.txt
But I think others might benefit from seeing your approach; it seems far more intelligent than mine =)
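One caveat about the `||` trick worth knowing: in POSIX shells `||` runs the second command only when the first one fails, `;` runs both unconditionally, and `&&` stops at the first failure. A quick illustration:

```shell
true  || echo "skipped: previous command succeeded"
false || echo "ran: previous command failed"
true  && echo "ran: previous command succeeded"
echo first ; echo second   # both always run
```

So the chained stylizeVideo call above only fires if the first one exits with a nonzero status.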
Well done! I didn't merge because I never found the time to review and test your changes in depth, but there is now a link in the README to your fork.
Here are some useful changes for the artistic-videos code.
Some speed-up optimizations in the .sh and .lua code. Optical flow calculation can now safely be launched in the background, with the forward and backward passes launched separately. Since this step runs on the CPU, the main video-stylization process can now run on the GPU simultaneously, without waiting for the optical flow to be precomputed first. If the CPU is slow and the files required for stylizing are not ready yet, the main process will wait for them (controlled by the -timer parameter of artistic_video.lua, in seconds, default 600).
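The wait-for-files behaviour described above could be sketched roughly like this in shell (function name and polling interval are illustrative, not the repo's actual Lua implementation):

```shell
# Poll for a file until it exists or a timeout (in seconds) expires.
wait_for_file() {
    file=$1
    timeout=${2:-600}   # default mirrors the -timer default above
    waited=0
    while [ ! -e "$file" ]; do
        if [ "$waited" -ge "$timeout" ]; then
            return 1    # gave up: the file never appeared
        fi
        sleep 1
        waited=$((waited + 1))
    done
    return 0
}
```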
Added a `-original_colors` option like in neural-style: if you set this to 1, the output image will keep the colors of the content image.
Some changes in stylizeVideo.sh: more options controllable from the console, background DeepFlow launching, the Ctrl+C hotkey kills all background and foreground processes, default values changed to more usable ones, and the ffmpeg resolution can now be set by width only (at this moment only ffmpeg is supported, I suppose). The script is now more handy; processing can be launched from anywhere by drag-and-dropping it and the required files into any terminal window like that:
19 jan 2017 update:
New usage available:
Not all arguments are needed; if an argument is missing, the script uses default values. Here is an example params_file_example.txt:
20 jan 2017 update:
Added the `-original_colors` option for artistic_video_multiPass.lua as well: if you set this to 1, the output image will keep the colors of the content image.
Also added an easy-to-launch script, stylizeVideo_multipass.sh, similar to stylizeVideo.sh.
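The width-only resolution setting mentioned in the stylizeVideo.sh changes can be illustrated with ffmpeg's scale filter, where passing -1 for the height derives it from the aspect ratio (file names below are placeholders, not the script's actual paths):

```shell
WIDTH=1280
SCALE_FILTER="scale=${WIDTH}:-1"   # -1 height = keep the aspect ratio
echo ffmpeg -i input.mp4 -vf "$SCALE_FILTER" frames/frame_%04d.ppm
```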