martinbenson / deep-photo-styletransfer

Implementation of "Deep Photo Style Transfer": https://arxiv.org/abs/1703.07511

library/deep_photo:latest not found #39

Closed: ellcyyang closed this issue 6 years ago

ellcyyang commented 7 years ago

When I ran nvidia-docker run -it --name deep_photo deep_photo I got:

Using default tag: latest
Pulling repository docker.io/library/deep_photo
nvidia-docker | 2017/05/12 09:34:48 Error: image library/deep_photo:latest not found
tarvos21 commented 7 years ago

@ellcyyang, did you build the docker image on your computer first? The docker image is supposed to be built by you locally. I built it, and it took several hours even on a quite powerful Google Cloud machine.

martinbenson commented 7 years ago

Yeah, you need to build the image first.
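For reference, a minimal sketch of the build-then-run sequence, assuming the Dockerfile is in the repo root and keeping the deep_photo image name used above (the README has the exact commands):

cd deep-photo-styletransfer
docker build -t deep_photo .                         # builds Torch and the CUDA deps; can take hours
nvidia-docker run -it --name deep_photo deep_photo   # run only after the build has finished

The "image library/deep_photo:latest not found" error appears when no local image with that name exists, so Docker falls back to trying to pull docker.io/library/deep_photo from the registry.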

ellcyyang commented 7 years ago

I guess I did miss something, so I rebuilt it and got:

Sending build context to Docker daemon 4.096 kB
Step 1 : FROM nvidia/cuda:8.0-cudnn5-devel-ubuntu14.04
 ---> fa6b0c133873
Step 2 : LABEL maintainer "martin@martin-benson.com"
 ---> Using cache
 ---> f8fd116f2ef5
Step 3 : RUN apt-get update && apt-get install --assume-yes git libprotobuf-dev libopenblas-dev liblapack-dev protobuf-compiler wget python3-pip
 ---> Using cache
 ---> 4c3dfdbd9918
Step 4 : RUN git clone https://github.com/torch/distro.git ~/torch --recursive && cd ~/torch && bash install-deps && ./install.sh
 ---> Using cache
 ---> b5ec5b194c41
Step 5 : WORKDIR /root/torch
 ---> Using cache
 ---> 302a2d4ea88f
Step 6 : SHELL
Unknown instruction: SHELL

martinbenson commented 7 years ago

Oh - not sure why the SHELL instruction doesn't work for you. Anyway, I don't think it's needed; it was just something I was experimenting with. You can just delete that line and the build will continue. I'll amend the Dockerfile in the repo at some point.
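If I had to guess, it's a Docker version thing: SHELL was only added to the Dockerfile syntax in Docker 1.12, so older daemons reject it as an unknown instruction. Something like this should get you past it (the exact SHELL arguments in the repo may differ from this sketch):

# in the Dockerfile, delete or comment out the SHELL step, e.g.
# SHELL ["/bin/bash", "-c"]
# then rebuild
docker build -t deep_photo .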

ellcyyang commented 7 years ago

I tried to skip nvidia-docker run -it --name deep_photo deep_photo and was informed that the models didn't work, so I ran sh models/download_models.sh. That works, but there is a new error:

Running th deepmatting_seg.lua -content_image examples/waterfront700.png -style_image examples/city_night700.png -laplacian examples/waterfront700.csv -output_image examples/waterfront_city_night.png -image_size 700 -gpu 0 -content_weight 5 -style_weight 10 -tv_weight 0.001 -num_iterations 2000 -init random -optimizer lbfgs -learning_rate 1 -lbfgs_num_correction 0 -print_iter 50 -save_iter 100 -style_scale 1.0 -original_colors 0 -pooling max -proto_file models/VGG_ILSVRC_19_layers_deploy.prototxt -model_file models/VGG_ILSVRC_19_layers.caffemodel -backend cudnn -content_layers relu4_2 -style_layers relu1_1,relu2_1,relu3_1,relu4_1,relu5_1 -lambda 1000 -patch 3 -eps 1e-07 -f_radius 7 -f_edge 0.05 -content_seg examples/waterfront_seg700.png -style_seg examples/city_night_seg700.png -multigpu_strategy 8
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message.  If the message turns out to be larger than 1073741824 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 574671192
Successfully loaded models/VGG_ILSVRC_19_layers.caffemodel
conv1_1: 64 3 3 3
conv1_2: 64 64 3 3
conv2_1: 128 64 3 3
conv2_2: 128 128 3 3
conv3_1: 256 128 3 3
conv3_2: 256 256 3 3
conv3_3: 256 256 3 3
conv3_4: 256 256 3 3
conv4_1: 512 256 3 3
conv4_2: 512 512 3 3
conv4_3: 512 512 3 3
conv4_4: 512 512 3 3
conv5_1: 512 512 3 3
conv5_2: 512 512 3 3
conv5_3: 512 512 3 3
conv5_4: 512 512 3 3
fc6: 1 1 25088 4096
fc7: 1 1 4096 4096
fc8: 1 1 4096 1000
loading matting laplacian...    examples/waterfront700.csv  
Capturing content targets   
nn.Sequential {
  [input -> (1) -> output]
  (1): nn.TVLoss
}
Capturing style target 1    
Running optimization with L-BFGS    
/home/ubuntu/distro/install/bin/lua: deepmatting_seg.lua:449: invalid device function
stack traceback:
    [C]: in function 'matting_laplacian'
    deepmatting_seg.lua:449: in function 'MattingLaplacian'
    deepmatting_seg.lua:380: in function 'opfunc'
    .../ubuntu/distro/install/share/lua/5.1/optim/lbfgs.lua:66: in function 'lbfgs'
    deepmatting_seg.lua:431: in function 'main'
    deepmatting_seg.lua:857: in main chunk
    [C]: in function 'dofile'
    ...distro/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
    [C]: ?
martinbenson commented 7 years ago

You'll need to let the build complete and then use nvidia-docker run ...

There should be no need to run download_models as that's done for you in the dockerfile.

It's not advisable to use deepmatting.lua directly either - it's best to use the deep_photo python script.

Follow the instructions in the readme exactly and it should be OK (additionally deleting the SHELL line as per above).
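To sum up, the flow should be roughly as follows (see the README for the exact python invocation; the script name and flag below are placeholders):

docker build -t deep_photo .                          # the VGG models are downloaded during the build
nvidia-docker run -it --name deep_photo deep_photo    # GPU-enabled container, not plain docker run
# inside the container, drive the process through the python wrapper rather than
# calling the lua scripts by hand; script name and flag are assumed here, check the README:
python3 deep_photo.py --help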