jcjohnson / neural-style

Torch implementation of neural style algorithm
MIT License
18.31k stars 2.7k forks

How to run batch #282

Open MixMe opened 8 years ago

MixMe commented 8 years ago

I have 20 images. How can I batch them all in the same style?

ttoinou commented 8 years ago

Hi, try writing a Python script or a shell script.

zyntax9999 commented 8 years ago

Can anyone give an example of a script for a batch run in neural-style?

jbgrant01 commented 8 years ago

I use a bash script to process several images. For this script you need to name the images image-1.jpg, image-2.jpg, and so on. You will need to specify in the script where the images are kept and where you want the output to go. You may also want to change the other options to suit your needs; hopefully it will be clear how to edit the script to fit them. Place the script in your neural-style directory and run it from there by entering "bash Multiple_Images.sh".

#!/bin/bash

a="image-"
b=".jpg"
c="out-"

for i in $(seq 1 20)
do
  inputfile=$a$i$b
  outputfile=$c$i$b
  th neural_style.lua -gpu 0 -backend cudnn -cudnn_autotune -optimizer adam -style_scale .5 -style_image project/input/Van_Gogh_Grey_Hat01.jpg -content_image YOUR_PATH/$inputfile -image_size 640 -num_iterations 100 -init image -style_weight 200 -content_weight 80 -normalize_gradients -output_image YOUR_PATH/$outputfile
  echo $outputfile
done
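A variant of the script above can loop over every .jpg in a directory with a glob, so the images don't have to be renamed image-1.jpg, image-2.jpg, and so on. This is only a sketch: the directory names are placeholders, and the th command is echoed as a dry run so you can inspect it first (remove the echo to actually run it).

```shell
#!/bin/bash
# Loop over every .jpg in the input directory instead of fixed names.
# "indir" and "outdir" are placeholder paths -- adjust to your setup.
indir="project/input"
outdir="project/output"

for f in "$indir"/*.jpg
do
  name=$(basename "$f" .jpg)
  # Echoed as a dry run; delete "echo" to run neural-style for real.
  echo th neural_style.lua -gpu 0 -backend cudnn \
    -style_image "$indir/Van_Gogh_Grey_Hat01.jpg" \
    -content_image "$f" \
    -output_image "$outdir/${name}-out.jpg"
done
```

Quoting "$f" also keeps filenames with spaces from breaking the loop.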

jbgrant01 commented 8 years ago

Generally I use a bash file to run neural-style. With my fumble fingers, I find it difficult to specify all my parameters on the command line. I use something like this:

#!/bin/bash

th neural_style.lua -gpu 0 -backend cudnn -cudnn_autotune -optimizer adam -style_scale .5 -style_image project/input/Van_Gogh_Grey_Hat01.jpg -content_image project/input/Chutes.jpg -image_size 640 -num_iterations 400 -init image -style_weight 200 -content_weight 80 -normalize_gradients -output_image project/output/Chutes.jpg

If I want to run multiple sessions, I just copy and paste the "th neural_style.lua . . ." section multiple times and adjust the parameters. That is handy if you want to try different style/content weights (or other options), or if you are processing several content and style images.
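The copy-and-paste approach can also be written as nested loops that sweep the weight combinations; a sketch, with illustrative weight values and placeholder paths, echoing each command as a dry run (remove the echo to actually run it):

```shell
#!/bin/bash
# Sweep style/content weight combinations with nested loops instead of
# copy-pasting the th line. Values and paths below are only examples.
for sw in 100 200 500
do
  for cw in 40 80
  do
    # Echoed as a dry run; delete "echo" to actually run.
    echo th neural_style.lua -gpu 0 -backend cudnn -optimizer adam \
      -style_image project/input/Van_Gogh_Grey_Hat01.jpg \
      -content_image project/input/Chutes.jpg \
      -style_weight "$sw" -content_weight "$cw" \
      -output_image "project/output/Chutes-sw${sw}-cw${cw}.jpg"
  done
done
```

Each combination gets its own output filename, so a single run produces six results you can compare side by side.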

zyntax9999 commented 8 years ago

Thanks, now I understand it (I think). I made some modifications to it:

#!/bin/bash

# Change the settings below to suit your preferences and computing power.
a="IMG"
b=".jpg"
c="IMG"
d=".png"
e="lbfgs"
f="style.jpg"
g="1300"

for i in $(seq 1 20)
do
  inputfile=$a$i$b
  outputfile=$c$i$d
  optimizer=$e
  style=$f
  size=$g
  th neural_style.lua -gpu 0 -backend cudnn -optimizer $optimizer -style_image YOUR_PATH/$style -content_image YOUR_PATH/$inputfile -image_size $size -output_image YOUR_PATH/$outputfile
  echo $outputfile
done

^ This should work, I guess?
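For what it's worth, a slightly hardened sketch of the same loop: quoting the variables protects paths with spaces, and a file check skips missing inputs instead of failing mid-run. Paths are placeholders, and the th command is echoed as a dry run (remove the echo to actually run it).

```shell
#!/bin/bash
# Same loop as above, with quoted variables and an existence check.
style="style.jpg"
size=1300

for i in $(seq 1 20)
do
  inputfile="IMG$i.jpg"
  outputfile="IMG$i.png"
  if [ ! -f "$inputfile" ]; then
    echo "skipping missing $inputfile"
    continue
  fi
  # Echoed as a dry run; delete "echo" to actually run.
  echo th neural_style.lua -gpu 0 -backend cudnn -optimizer lbfgs \
    -style_image "$style" -content_image "$inputfile" \
    -image_size "$size" -output_image "$outputfile"
done
```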

jbgrant01 commented 8 years ago

Looks like it would work.

PitBult commented 8 years ago

Hi! I have a question.

I want to process multiple images, but I don't want to load the "VGG_ILSVRC_19_layers.caffemodel" model anew each time. How can I do this?

[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message.  If the message turns out to be larger than 1073741824 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 574671192
Successfully loaded models/VGG_ILSVRC_19_layers.caffemodel
conv1_1: 64 3 3 3
conv1_2: 64 64 3 3
conv2_1: 128 64 3 3
conv2_2: 128 128 3 3
conv3_1: 256 128 3 3
conv3_2: 256 256 3 3
conv3_3: 256 256 3 3
conv3_4: 256 256 3 3
conv4_1: 512 256 3 3
conv4_2: 512 512 3 3
conv4_3: 512 512 3 3
conv4_4: 512 512 3 3
conv5_1: 512 512 3 3
conv5_2: 512 512 3 3
conv5_3: 512 512 3 3
conv5_4: 512 512 3 3
fc6: 1 1 25088 4096
fc7: 1 1 4096 4096
fc8: 1 1 4096 1000

Loading the model takes 5-6 seconds each time. If you generate 10 images, that is almost a minute of overhead.

neural_style.lua

local loadcaffe_backend = params.backend
if params.backend == 'clnn' then loadcaffe_backend = 'nn' end
local cnn = loadcaffe.load(params.proto_file, params.model_file, loadcaffe_backend):float()
if params.gpu >= 0 then
  if params.backend ~= 'clnn' then
    cnn:cuda()
  else
    cnn:cl()
  end
end

Thank you!