gafr / chainer-fast-neuralstyle-models

Models for the chainer fast neuralstyle

Is this a training error? #7

Closed sukerlove closed 8 years ago

sukerlove commented 8 years ago

I used this source: https://github.com/yusuketomoto/chainer-fast-neuralstyle

I downloaded http://msvocds.blob.core.windows.net/coco2014/train2014.zip, unzipped train2014.zip, then ran:

```
suker@suker:~/chainer-fast-neuralstyle$ python train.py -s rio.jpg -d ./train2014 -g 0
/usr/local/lib/python2.7/dist-packages/pkg_resources/__init__.py:1298: UserWarning: /home/gtx1080/.python-eggs is writable by group/others and vulnerable to attack when used with get_resource_filename. Consider a more secure location (set with .set_extraction_path or the PYTHON_EGG_CACHE environment variable).
  warnings.warn(msg, UserWarning)
num traning images: 82783
82783 iterations, 2 epochs
/usr/local/lib/python2.7/dist-packages/chainer-1.14.0-py2.7-linux-x86_64.egg/chainer/cuda.py:87: UserWarning: cuDNN is not enabled.
Please reinstall chainer after you install cudnn
(see https://github.com/pfnet/chainer#installation).
  'cuDNN is not enabled.\n'
epoch 0
(epoch 0) batch 0/82783... training loss is...235437792.0
(epoch 0) batch 1/82783... training loss is...329452736.0
(epoch 0) batch 2/82783... training loss is...154218272.0
(epoch 0) batch 3/82783... training loss is...63537960.0
(epoch 0) batch 4/82783... training loss is...58619268.0
(epoch 0) batch 5/82783... training loss is...55183628.0
(epoch 0) batch 6/82783... training loss is...54082024.0
```

Is this operating correctly?

After more than 5 hours, the training finished.

The output files were:

rio_0.model rio_0.state

rio_1.model rio_1.state

rio.model rio.state

I used rio.model to generate a picture, but the result doesn't look right. Did I get any step wrong?

rio-train.zip

rio.jpg rio.model rio-dst.jpg

thanks

6o6o commented 8 years ago

Looks about right, but your reference style appears to already be the product of a style transfer operation run on this image in some other app. It's better to choose original, unaltered styles and train with the --image_size 512 option, and even then the result may not always be plausible.

sukerlove commented 8 years ago

@6o6o thanks, I tried the --image_size 512 option now, but it runs out of memory.

GPU: GTX 1080; memory: 32 GB

```
num traning images: 82783
82783 iterations, 2 epochs
/usr/local/lib/python2.7/dist-packages/chainer/cuda.py:87: UserWarning: cuDNN is not enabled.
Please reinstall chainer after you install cudnn
(see https://github.com/pfnet/chainer#installation).
  'cuDNN is not enabled.\n'
epoch 0
(epoch 0) batch 0/82783... training loss is...255842176.0
Traceback (most recent call last):
  File "train.py", line 133, in <module>
    feature_hat = vgg(y)
  File "/home/gtx1080/work/src/github.com/yusuketomoto/chainer-fast-neuralstyle-0829/net.py", line 97, in __call__
    h = F.max_pooling_2d(y1, 2, stride=2)
  File "/usr/local/lib/python2.7/dist-packages/chainer/functions/pooling/max_pooling_2d.py", line 173, in max_pooling_2d
    return MaxPooling2D(ksize, stride, pad, cover_all, use_cudnn)(x)
  File "/usr/local/lib/python2.7/dist-packages/chainer/function.py", line 130, in __call__
    outputs = self.forward(in_data)
  File "/usr/local/lib/python2.7/dist-packages/chainer/function.py", line 234, in forward
    return self.forward_gpu(inputs)
  File "/usr/local/lib/python2.7/dist-packages/chainer/functions/pooling/max_pooling_2d.py", line 39, in forward_gpu
    y = cuda.cupy.empty((n, c, y_h, y_w), dtype=x[0].dtype)
  File "cupy/core/core.pyx", line 85, in cupy.core.core.ndarray.__init__ (cupy/core/core.cpp:5013)
  File "cupy/cuda/memory.pyx", line 275, in cupy.cuda.memory.alloc (cupy/cuda/memory.cpp:5515)
  File "cupy/cuda/memory.pyx", line 414, in cupy.cuda.memory.MemoryPool.malloc (cupy/cuda/memory.cpp:8076)
  File "cupy/cuda/memory.pyx", line 430, in cupy.cuda.memory.MemoryPool.malloc (cupy/cuda/memory.cpp:8002)
  File "cupy/cuda/memory.pyx", line 337, in cupy.cuda.memory.SingleDeviceMemoryPool.malloc (cupy/cuda/memory.cpp:6970)
  File "cupy/cuda/memory.pyx", line 357, in cupy.cuda.memory.SingleDeviceMemoryPool.malloc (cupy/cuda/memory.cpp:6797)
  File "cupy/cuda/memory.pyx", line 255, in cupy.cuda.memory._malloc (cupy/cuda/memory.cpp:5457)
  File "cupy/cuda/memory.pyx", line 256, in cupy.cuda.memory._malloc (cupy/cuda/memory.cpp:5378)
  File "cupy/cuda/memory.pyx", line 31, in cupy.cuda.memory.Memory.__init__ (cupy/cuda/memory.cpp:1540)
  File "cupy/cuda/runtime.pyx", line 196, in cupy.cuda.runtime.malloc (cupy/cuda/runtime.cpp:3067)
  File "cupy/cuda/runtime.pyx", line 126, in cupy.cuda.runtime.check_status (cupy/cuda/runtime.cpp:1982)
cupy.cuda.runtime.CUDARuntimeError: cudaErrorMemoryAllocation: out of memory
```
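For a rough sense of why doubling --image_size hits memory so much harder, here is a back-of-envelope estimate of the forward-pass activation footprint of a standard VGG-16 conv stack (the loss network in this repo is VGG-based). The layer layout, batch size 1, and float32 are my assumptions; the real run also holds gradients, the transformer network, and allocator overhead, so treat this strictly as a lower bound illustrating the quadratic scaling, not an account of the actual OOM:

```python
# Hedged estimate: forward activation memory of the VGG-16 conv stack.
# Assumptions (not from the thread): batch 1, float32, standard VGG-16
# block layout, 2x2 max-pool halving the spatial size after each block.

BYTES = 4  # float32

def conv_stack_bytes(size, blocks):
    """Sum activation bytes over (num_convs, channels) blocks."""
    total = 0
    for n_convs, channels in blocks:
        total += n_convs * channels * size * size * BYTES
        size //= 2  # spatial size halves after each block's pooling
    return total

# (number of conv layers, output channels) per VGG-16 block
vgg16_blocks = [(2, 64), (2, 128), (3, 256), (3, 512), (3, 512)]

for img in (256, 512):
    mb = conv_stack_bytes(img, vgg16_blocks) / 2**20
    print("image_size %d: ~%.0f MB of activations" % (img, mb))
```

Activation memory grows with the square of the image size, so 512 costs 4x what 256 does before gradients and workspace are even counted, which is consistent with cuDNN's lower memory usage making the difference here.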

6o6o commented 8 years ago

You need to install cuDNN. It reduces memory usage significantly. After installing CUDA, download cuDNN here, place the necessary files to your CUDA installation dir and reinstall chainer.

sukerlove commented 8 years ago

@6o6o thanks

I had already installed cuDNN:

```
$ ls -la /usr/local/cuda-8.0/lib64/
......
-rw-r--r-- 1 root root   737516 Aug 11 18:21 libcudart_static.a
lrwxrwxrwx 1 root root       35 Aug 20 13:38 libcudnn.so -> /usr/local/cuda/lib64/libcudnn.so.5
lrwxrwxrwx 1 root root       39 Aug 16 14:28 libcudnn.so.4 -> /usr/local/cuda/lib64/libcudnn.so.4.0.7
-rwxr-xr-x 1 root root 61453024 Aug 16 14:15 libcudnn.so.4.0.7
lrwxrwxrwx 1 root root       39 Aug 16 10:38 libcudnn.so.5 -> /usr/local/cuda/lib64/libcudnn.so.5.1.5
-rwxr-xr-x 1 root root 79337624 Aug 11 18:42 libcudnn.so.5.1.5
lrwxrwxrwx 1 root root       40 Aug 16 11:08 libcudnn.so.7.0 -> /usr/local/cuda/lib64/libcudnn.so.7.0.64
......
```

and set the paths:

```
declare -x CPATH="/usr/local/cuda-8.0/include:/home/gtx1080/:"
declare -x LD_LIBRARY_PATH="/home/gtx1080/torch/install/lib:/usr/local/cuda-8.0/lib64:"
declare -x LIBRARY_PATH="/home/gtx1080/torch/install/lib:/usr/local/cuda-8.0/lib64:"
```
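One detail that may be worth checking (my own assumption, not something diagnosed in this thread): in that LD_LIBRARY_PATH the Torch install dir comes before the CUDA dir, and the dynamic linker searches entries left to right, so if Torch shipped its own libcudnn, that copy would shadow the one in /usr/local/cuda-8.0/lib64. A tiny sketch of the search order, using the value posted above:

```python
# Sketch: the loader consults LD_LIBRARY_PATH entries left to right.
# The value is copied from the declare -x output above; the shadowing
# concern is an assumption, not a confirmed cause.
ld_library_path = "/home/gtx1080/torch/install/lib:/usr/local/cuda-8.0/lib64:"
search_order = [p for p in ld_library_path.split(":") if p]
for i, path in enumerate(search_order):
    print(i, path)  # index 0 (the torch dir) is searched before the CUDA dir
```

If that turns out to be the case, listing the CUDA lib64 dir first (or removing a stale bundled libcudnn) might change which cuDNN chainer links against.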

6o6o commented 8 years ago

Here is the original style image used to create the picture you provided:

rio2016_look