manuelruder / fast-artistic-videos

Video style transfer using feed-forward networks.

Style Model Training Independent of Optical Flow? #13

Open JeffMusgrave opened 6 years ago

JeffMusgrave commented 6 years ago

What is the fastest way to train a new style? Is it necessary to train an optical flow model on the Hollywood dataset for every new style?

manuelruder commented 6 years ago

You only need to create the Hollywood dataset once; you can then reuse it for every new style you want to train. For optical flow, you can use a pretrained model, as described in the FlowNet2 repository.

Since you are asking for the "fastest way": in theory you could use only the COCO dataset with "shift", "zoom_out", and "single_image" as the datasource parameters. Results may be inferior, since the model then only learns from simulated camera motion, but that may be sufficient for your use case.
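To illustrate what a single-image datasource like "shift" buys you: the second frame is a shifted copy of the first, so the ground-truth flow is known exactly and no flow computation is needed. The sketch below is a hypothetical NumPy illustration; the function and parameter names are mine, not the repo's.

```python
import numpy as np

def make_shift_tuple(img, dx, dy):
    """Simulate camera motion from a single image (hypothetical sketch):
    frame 2 is frame 1 shifted by (dx, dy), so the ground-truth flow is
    a constant field and occlusion is known at the image border."""
    h, w = img.shape[:2]
    frame2 = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    flow = np.zeros((h, w, 2), dtype=np.float32)
    flow[..., 0] = dx  # horizontal displacement
    flow[..., 1] = dy  # vertical displacement
    # certainty mask: pixels that wrapped around the border are invalid
    cert = np.ones((h, w), dtype=np.float32)
    if dy > 0:
        cert[:dy, :] = 0
    elif dy < 0:
        cert[dy:, :] = 0
    if dx > 0:
        cert[:, :dx] = 0
    elif dx < 0:
        cert[:, dx:] = 0
    return frame2, flow, cert

img = np.arange(16, dtype=np.float32).reshape(4, 4)
f2, flow, cert = make_shift_tuple(img, dx=1, dy=0)
```

Because the flow is exact by construction, such tuples are free to generate; the trade-off, as noted above, is that the network only ever sees rigid camera-like motion.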

JeffMusgrave commented 6 years ago

By the way, how long did it take you to process the Hollywood dataset?

noufpy commented 6 years ago

@jargonfilter I computed the optical flow for the full Hollywood dataset, and it took me about 1.5–2 weeks. It produced really high-quality results. Machine specs: Linux (Ubuntu 16.04), 24 GB GPU, 30 GB RAM, 8 CPUs.

manuelruder commented 6 years ago

I computed the optical flow on our university cluster with multiple jobs in parallel (there is no "training" involved, by the way), but I don't remember the exact number of GPU days. Thanks for the reference numbers, noufali.

This is quite a long time. As pointed out in the description, the amount of data can be reduced to one fifth with a simple parameter switch. Concurrent work on video style transfer (Gupta et al., Chen et al.) also uses smaller datasets, so I don't expect the quality to drop significantly.

bafonso commented 5 years ago

@manuelruder do you mean that the num_tuples_per_scene parameter should be set lower than 5? @noufali Any chance that optical flow dataset could be made available for download? I only have a normal 1070 and an i5 8400; god knows how long that will take :(

manuelruder commented 5 years ago

@bafonso Yes, exactly; it can be as low as 1. The script first separates all the video clips by scene, then ranks every possible tuple in each scene by the amount of motion and keeps the top num_tuples_per_scene tuples. Reducing this parameter is the most natural way to shrink the dataset, since it mostly removes similar-looking tuples. The size can be reduced further by deleting random files in "AVIClipsScenes" before executing the scripts; they dynamically detect the number of video files available.
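The per-scene selection step described above can be sketched as follows. The motion scores and scene names here are toy values for illustration; the repo's actual scripts operate on video files, not in-memory dictionaries.

```python
# Toy data: scene -> list of (tuple_id, motion_score); values are hypothetical.
scenes = {
    "scene_a": [("t1", 0.9), ("t2", 0.1), ("t3", 0.5)],
    "scene_b": [("t4", 0.3), ("t5", 0.7)],
}

def select_tuples(scenes, num_tuples_per_scene):
    """Keep only the num_tuples_per_scene highest-motion tuples in each
    scene, mirroring the selection step described above."""
    selected = []
    for scene, tuples in scenes.items():
        ranked = sorted(tuples, key=lambda t: t[1], reverse=True)
        selected.extend(tid for tid, _ in ranked[:num_tuples_per_scene])
    return selected
```

With num_tuples_per_scene=1, only the single highest-motion tuple per scene survives, which is why lowering the parameter shrinks the dataset while preferentially discarding near-duplicate, low-motion tuples.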

noufpy commented 5 years ago

@bafonso Hey there! Yeah, I can share mine with you. It's definitely a tedious process. Shoot me your email?

AIaesthetic commented 5 years ago

I'm wondering if you could share that with me as well, @noufali. My email is bouaadii@gmail.com

That would be a lifesaver.

ryanqiutu commented 5 years ago

@noufali Excuse me! I've computed flow files using DeepFlow and reliability files using consistencyChecker on part of the Hollywood dataset, but I'm not sure the results are right. I compared the computed reliability maps against the occlusion masks of the MPI-Sintel dataset (in my understanding, they should be the same), but the results are different. Could you show me some example flow and reliability files?
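For reference, the idea behind a forward-backward consistency check can be sketched as below. Note that such a check flags not only occluded pixels but any pixel where the forward and backward flows disagree (e.g. at motion boundaries or where the flow is simply inaccurate), so its output need not match Sintel's pure occlusion masks. This is a simplified sketch; the exact thresholds and interpolation in the real consistencyChecker may differ.

```python
import numpy as np

def consistency_mask(fwd, bwd, tol=0.01, abs_tol=0.5):
    """Forward-backward consistency check (simplified sketch):
    a pixel is reliable if following the forward flow and then the
    backward flow returns close to the starting point."""
    h, w, _ = fwd.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    # End point of the forward flow, rounded to the nearest valid pixel.
    x2 = np.clip(np.rint(xs + fwd[..., 0]).astype(int), 0, w - 1)
    y2 = np.clip(np.rint(ys + fwd[..., 1]).astype(int), 0, h - 1)
    # Backward flow sampled at the forward end point.
    bwd_at = bwd[y2, x2]
    diff = fwd + bwd_at  # ~0 wherever the two flows are consistent
    err = np.sum(diff ** 2, axis=-1)
    mag = np.sum(fwd ** 2, axis=-1) + np.sum(bwd_at ** 2, axis=-1)
    return err <= tol * mag + abs_tol  # True = reliable
```

A perfectly consistent flow pair (backward flow equal to the negated forward flow) passes everywhere, while a mismatched pair fails, regardless of whether the cause is occlusion or estimation error.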

pizboese commented 4 years ago

@noufali Sorry to bother you and revive this old thread. I just started to work on and explore video style transfer on some consumer hardware. It would take ages for me to compute the optical flow for the Hollywood dataset. Would it be possible for you to share yours, in case you still have it? Thanks in advance! (piz@assiclub2010.de)

ryletko commented 4 years ago

@noufali Could you please share it with me too? I'm a newbie in this topic and don't have the proper hardware or skills for the computations, but I'm very curious about it. My email is ryletko@gmail.com Thank you!

StoneCypher commented 4 years ago

@noufali - I would also like the optical flow. Would you consider putting it in a repo, so that people don't have to ask you directly?

chen-jimmy commented 4 years ago

@noufali Hi, sorry to bother you but I'm currently working on a style transfer project and this step is taking ages to complete on my hardware. If you could share your optical flow results with me, I would deeply appreciate it. My email is jimmychen@utexas.edu