glennamarshall opened this issue 6 years ago
Send me an email and I can try to help. cysmith1010 at gmail.
Hi! Trying to tackle this issue too, on Win10. If it's not too much trouble, may I contact you with some questions? Thanks in advance. I noticed that deepflow2 only supports Linux, which seems to make video transfer on Windows impossible.
If you use the Colab version you might get it to work: https://colab.research.google.com/drive/18w-b-ntmA8Y5u7JYLrwPqFAIefj4ZFH5
Thanks. The Colab works great for images but has an inactivity limitation. It's probably there to prevent Bitcoin mining, but it also prevents us from using it for long render tasks, unfortunately XD
I've been using it to batch-render animation frames with success. Make sure you keep the browser open - you can get up to 12 hours of non-stop connection each day.
That's impressive! Will give it a try then.
Good luck - I'm about to try it myself. I know that Cameron is currently working on a better optical flow as well.
🍻 Speaking of which, I wonder how the results would look with a low-fidelity optical flow. If there's some kind of app or library that produces optical flow quickly and runs on Windows, it would be really easy for people to try this out. The results may not look as good, but it might have potential for artistic expression...
I know what you mean - I did this video without optical flow altogether, but it still has 'artistic expression', I think. https://vimeo.com/288168216
Great music and choice of input videos!
thanks!
I had some new findings on how to get video to work, so I want to share them here:
The forward/backward flows and weight files (.flo and .txt) depend on C++ binaries that only run on Linux. While the flow files can be generated with OpenCV on Windows, I couldn't find a Windows replacement for the consistency checker. No bueno for Win10.
That being said, the scripts for generating the flows and weights can be run separately from the style transfer itself, and they don't require much setup at all - no GPU needed. We could generate them on Google Colab or on our own Linux machines, then take the files back to Windows and run the transfer there.
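In case it helps, here's a rough sketch of reading and writing Middlebury-format .flo files with plain NumPy, so the flows generated on Colab/Linux can be loaded back on Windows without any Linux-only binaries. The `write_flo`/`read_flo` names are my own, not from the repo:

```python
import numpy as np

FLO_MAGIC = 202021.25  # sanity-check value from the Middlebury .flo spec


def write_flo(path, flow):
    """Write a float32 array of shape (height, width, 2) holding (u, v)."""
    h, w = flow.shape[:2]
    with open(path, "wb") as f:
        np.array([FLO_MAGIC], np.float32).tofile(f)
        np.array([w], np.int32).tofile(f)
        np.array([h], np.int32).tofile(f)
        # (u, v) pairs interleaved, row-major
        flow.astype(np.float32).tofile(f)


def read_flo(path):
    """Read a .flo file back into a (height, width, 2) float32 array."""
    with open(path, "rb") as f:
        magic = np.fromfile(f, np.float32, count=1)[0]
        assert magic == FLO_MAGIC, "not a .flo file"
        w = int(np.fromfile(f, np.int32, count=1)[0])
        h = int(np.fromfile(f, np.int32, count=1)[0])
        data = np.fromfile(f, np.float32, count=2 * w * h)
    return data.reshape(h, w, 2)
```

The flow itself could come from whatever runs on your platform (e.g. OpenCV's Farneback or DeepFlow implementations); this only covers moving the files between machines.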
the neural_style.py file may also need some adjustment depending on what you're trying to do. Here's my version: https://gist.github.com/lcb931023/bf8b7a7988e17dab7888077c55b997dc
Nice one..
I have a render farm set up to do all the optical flow and styling in Colab. I use multiple Google accounts and Google Cloud Storage. I can offer more details on the setup if anyone wants.
That's so clever! How did you achieve it?
For the single-image setup, I have the call to neural_style.py inside a loop, passing in the frame number.
I have 5 accounts logged in (5 is the maximum allowed at the same time).
Before each frame is rendered, I write a text file like "FrameRendering_0001.txt" to my Google Cloud Storage bucket. This lets the other renderers know that this frame is being handled, so they skip it and find the next available frame to render.
For optical flow calculation and proper video rendering it's not as efficient, as you always need the previous warped frame - so I can't farm-render a single sequence, but I can still do multiple independent sequences at the same time, obviously.
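The "FrameRendering_0001.txt" trick above could be sketched roughly like this, using a local directory as a stand-in for the Cloud Storage bucket (`claim_frame` and `render_available_frames` are hypothetical names; the real loop would shell out to neural_style.py with the frame number):

```python
import os


def claim_frame(bucket_dir, frame):
    """Try to claim a frame by creating FrameRendering_XXXX.txt.
    Returns True if this worker got the frame, False if another one did."""
    lock = os.path.join(bucket_dir, "FrameRendering_%04d.txt" % frame)
    try:
        # O_CREAT | O_EXCL fails if the file already exists, so the
        # claim is atomic even with several renderers racing
        fd = os.open(lock, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False


def render_available_frames(bucket_dir, num_frames):
    """Loop over all frames, rendering only the ones this worker claims."""
    rendered = []
    for frame in range(1, num_frames + 1):
        if not claim_frame(bucket_dir, frame):
            continue  # another renderer is already handling this frame
        # here the real script would call something like
        # subprocess.run(["python", "neural_style.py", ...]) for this frame
        rendered.append(frame)
    return rendered
```

One consequence of writing the lock before rendering starts: a session that disconnects mid-frame leaves a stale lock behind, so it's presumably worth clearing leftover FrameRendering_*.txt files before restarting a run.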
Here's something I just did using all of the above (optical flow & video): https://vimeo.com/290079999
Thanks for sharing that. That's a great render.
How long did the 2-minute video take on the cloud?
Hi. Being a noob, I'm just about coping with getting this stuff working - I have Windows 10 and managed to get single-image transfer working with 'python neural_style.py'. Using bash / Ubuntu doesn't activate my GPU.
I really want to get some animation working, but all sorts of errors / problems are stopping me - both using Python and bash. I'm asking in advance for help before asking specific questions and detailing problems, as a lot of my issues have already been raised here without a response. Thanks.