jcjohnson / neural-style

Torch implementation of neural style algorithm
MIT License

OpenCL support #44

Closed. napsternxg closed this issue 8 years ago.

napsternxg commented 9 years ago

I tried implementing OpenCL support and the code is at: https://github.com/napsternxg/neural-style/tree/opencl

However I get the following error when running the code:

$ th neural_style_opencl.lua -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/brad_pitt.jpg -gpu 0 -backend 'clnn' -output_image profile.png
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message.  If the message turns out to be larger than 1073741824 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 574671192
Successfully loaded models/VGG_ILSVRC_19_layers.caffemodel
conv1_1: 64 3 3 3
conv1_2: 64 64 3 3
conv2_1: 128 64 3 3
conv2_2: 128 128 3 3
conv3_1: 256 128 3 3
conv3_2: 256 256 3 3
conv3_3: 256 256 3 3
conv3_4: 256 256 3 3
conv4_1: 512 256 3 3
conv4_2: 512 512 3 3
conv4_3: 512 512 3 3
conv4_4: 512 512 3 3
conv5_1: 512 512 3 3
conv5_2: 512 512 3 3
conv5_3: 512 512 3 3
conv5_4: 512 512 3 3
fc6: 1 1 25088 4096
fc7: 1 1 4096 4096
fc8: 1 1 4096 1000
Using Advanced Micro Devices, Inc. , OpenCL platform: AMD Accelerated Parallel Processing
Using OpenCL device: Turks
/home/torch/install/bin/luajit: C++ exception

I believe the issue is caused by SpatialConvolutionMM, which is implemented in the ccn2 module.

jcjohnson commented 9 years ago

Nice work! I took a look at your code and didn't see anything obviously wrong. Can you figure out exactly where it's crashing? My guess is either here where you cast the network to OpenCL, or here or here where you first try to run the network forward.
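
One quick way to narrow that down is to wrap the suspect calls in pcall. A minimal sketch, assuming the variable names from neural_style.lua (cnn, net, content_image_caffe):

local ok, err = pcall(function() cnn:cl() end)   -- the cast to OpenCL
print('cast to cl:', ok, err)
ok, err = pcall(function() net:forward(content_image_caffe) end)   -- the first forward pass
print('first forward:', ok, err)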

napsternxg commented 9 years ago

I will try to debug this and get back to this post in a few days. Unfortunately, I am new to Lua and Torch and this is the first code I have written in the language, so I am still learning.

Also, it occurred to me today: could my GPU memory be an issue? I only have a 1 GB ATI FirePro 3900 GPU.

rkrzr commented 9 years ago

@napsternxg GPU memory could indeed be an issue: the memory requirements grow roughly quadratically with the size of the image you are rendering. I am running in CPU mode on a 32GB machine and I start running out of memory at -image_size > 1024. So if you want to be sure that memory is not the problem, just run with -image_size 50 or so.
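
For a rough sense of the scaling (a back-of-the-envelope sketch, not measured numbers): a conv layer's activations take planes x height x width x 4 bytes, so doubling -image_size roughly quadruples memory. For conv1_1 alone:

-- rough estimate of conv1_1 activation memory (64 float planes, 4 bytes each)
local function conv1_1_megabytes(image_size)
  return 64 * image_size * image_size * 4 / 2^20
end
print(conv1_1_megabytes(512))    -- 64
print(conv1_1_megabytes(1024))   -- 256: 4x the memory for 2x the image size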

jcjohnson commented 9 years ago

Good call on GPU memory; 1GB is not enough for the default settings.

napsternxg commented 9 years ago

I tried running with -image_size 10 and still get the same error.

napsternxg commented 9 years ago

Ok, using multiple print statements I believe I have figured out the issue: @jcjohnson was right, the error occurs while casting the cnn object to cl().

I checked the clnn documentation and all the layers appear to be implemented. Is there something I am missing?

These are the first few lines of my generated models/VGG_ILSVRC_19_layers_deploy.prototxt.opencl.lua

require 'nn'
require 'clnn'
local model = {}
table.insert(model, {'conv1_1', nn.SpatialConvolutionMM(3, 64, 3, 3, 1, 1, 1, 1)})
table.insert(model, {'relu1_1', nn.ReLU(true)})
table.insert(model, {'conv1_2', nn.SpatialConvolutionMM(64, 64, 3, 3, 1, 1, 1, 1)})
table.insert(model, {'relu1_2', nn.ReLU(true)})
table.insert(model, {'pool1', nn.SpatialMaxPooling(2, 2, 2, 2, 0, 0):ceil()})
table.insert(model, {'conv2_1', nn.SpatialConvolutionMM(64, 128, 3, 3, 1, 1, 1, 1)})
table.insert(model, {'relu2_1', nn.ReLU(true)})
table.insert(model, {'conv2_2', nn.SpatialConvolutionMM(128, 128, 3, 3, 1, 1, 1, 1)})
table.insert(model, {'relu2_2', nn.ReLU(true)})
table.insert(model, {'pool2', nn.SpatialMaxPooling(2, 2, 2, 2, 0, 0):ceil()})
jcjohnson commented 9 years ago

I'm not really sure what's wrong; here are two random ideas:

(1) In the .opencl.lua file maybe you also need to require 'cltorch'? (2) Maybe the call to ceil() for nn.SpatialMaxPooling() is not supported for clnn? You can chop out these method calls with some dirty string manipulation like this: https://github.com/jcjohnson/neural-style/commit/cba886c6c33ec53e4a6a56a67d5edf304cee88a0#diff-00b26e06a3b5ecc7938a4da2d6fe0332R49
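
For (2), a minimal sketch of that kind of string manipulation (roughly what the linked commit does when writing out the generated .lua file):

-- strip the :ceil() calls from each generated line so clnn never sees the unsupported method
local line = "table.insert(model, {'pool1', nn.SpatialMaxPooling(2, 2, 2, 2, 0, 0):ceil()})"
line = line:gsub(':ceil%(%)', '')
print(line)   -- table.insert(model, {'pool1', nn.SpatialMaxPooling(2, 2, 2, 2, 0, 0)})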

vkorablin commented 9 years ago

@jcjohnson

Maybe the call to ceil() for nn.SpatialMaxPooling() is not supported for clnn

seems to be the case if I understand correctly: https://github.com/hughperkins/clnn/search?q=ceil

@hughperkins could you confirm?

hughperkins commented 9 years ago

Yes, ceil() is not currently implemented. Per @szagoruyko, this should be fairly easy to add: https://github.com/hughperkins/clnn/issues/5 Not sure I have time in the immediate future, but it seems to be a popular request, so I might find time if it's still open in a week or two.

(Edit: an alternative way to hack around this for now, if you don't need the functionality behind :ceil(), just that the method call doesn't throw an exception, would be to add something like this to your code:

function nn.SpatialMaxPooling:ceil()
   return self
end

This will monkey-patch SpatialMaxPooling to have this method, although the method won't actually do anything for now.)

(PS Wow, the pictures of output from the neural-style project on the front-page README.md look awesome :-O )

(Edit 3: by the way, when th crashes, it is often the case that running directly with luajit instead produces fractionally more error information. Typically you'd probably want to run it from gdb too, and get the call stack. I have a script called rungdb.sh, which looks like:

#!/bin/bash
# break on the first C++ exception thrown, then run the given script with its arguments
gdb $1 -ex "catch throw" -ex "run $2 $3 $4 $5 $6 $7 $8 $9"

then I run it like:

rungdb.sh luajit myluascript.lua
# and once it's crashed, type:
bt
# ... to get the backtrace

You need to build in debug mode to get line numbers and so on. I usually do this by editing the rockspec file for the relevant Torch projects to use -DCMAKE_BUILD_TYPE=Debug, and then running luarocks make rocks/name-of-luarocks-file.rockspec to reinstall it.)

napsternxg commented 9 years ago

Thanks @hughperkins. I ran GDB on the file and here is the result.

$ gdb luajit -ex "catch throw"
GNU gdb (Ubuntu 7.7.1-0ubuntu5~14.04.2) 7.7.1
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from luajit...(no debugging symbols found)...done.
Catchpoint 1 (throw)
(gdb) run neural_style_opencl.lua -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/brad_pitt.jpg -gpu 0 -backend 'clnn' -output_image profile.png -image_size 10
Starting program: /home/username/torch/install/bin/luajit neural_style_opencl.lua -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/brad_pitt.jpg -gpu 0 -backend 'clnn' -output_image profile.png -image_size 10
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Traceback (most recent call last):
  File "/usr/share/gdb/auto-load/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.19-gdb.py", line 63, in <module>
    from libstdcxx.v6.printers import register_libstdcxx_printers
ImportError: No module named 'libstdcxx'
In Function main
Starting load model
In loadcaffe_load
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message.  If the message turns out to be larger than 1073741824 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 574671192
Successfully loaded models/VGG_ILSVRC_19_layers.caffemodel
Finished proto to lua
In iteration %d 1
conv1_1: 64 3 3 3
In iteration %d 2
In iteration %d 3
conv1_2: 64 64 3 3
In iteration %d 4
In iteration %d 5
In iteration %d 6
conv2_1: 128 64 3 3
In iteration %d 7
In iteration %d 8
conv2_2: 128 128 3 3
In iteration %d 9
In iteration %d 10
In iteration %d 11
conv3_1: 256 128 3 3
In iteration %d 12
In iteration %d 13
conv3_2: 256 256 3 3
In iteration %d 14
In iteration %d 15
conv3_3: 256 256 3 3
In iteration %d 16
In iteration %d 17
conv3_4: 256 256 3 3
In iteration %d 18
In iteration %d 19
In iteration %d 20
conv4_1: 512 256 3 3
In iteration %d 21
In iteration %d 22
conv4_2: 512 512 3 3
In iteration %d 23
In iteration %d 24
conv4_3: 512 512 3 3
In iteration %d 25
In iteration %d 26
conv4_4: 512 512 3 3
In iteration %d 27
In iteration %d 28
In iteration %d 29
conv5_1: 512 512 3 3
In iteration %d 30
In iteration %d 31
conv5_2: 512 512 3 3
In iteration %d 32
In iteration %d 33
conv5_3: 512 512 3 3
In iteration %d 34
In iteration %d 35
conv5_4: 512 512 3 3
In iteration %d 36
In iteration %d 37
In iteration %d 38
In iteration %d 39
fc6: 1 1 25088 4096
In iteration %d 40
In iteration %d 41
In iteration %d 42
fc7: 1 1 4096 4096
In iteration %d 43
In iteration %d 44
In iteration %d 45
fc8: 1 1 4096 1000
In iteration %d 46
Finished iterations     clnn
Finished network setup
Using Advanced Micro Devices, Inc. , OpenCL platform: AMD Accelerated Parallel Processing
Using OpenCL device: Turks
[New Thread 0x7fffb2cc1700 (LWP 5091)]
Catchpoint 1 (exception thrown), 0x00007fffc389b8b0 in __cxa_throw () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6

And here is the back trace:

(gdb) bt
#0  0x00007fffc389b8b0 in __cxa_throw () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#1  0x00007fffc2f3719b in EasyCL::checkError (error=<optimized out>) at /home/username/Downloads/cltorch/src/EasyCL/EasyCL.cpp:514
#2  0x00007fffc2f42159 in CLWrapper::createOnDevice (this=0x910b20) at /home/username/Downloads/cltorch/src/EasyCL/CLWrapper.cpp:62
#3  0x00007fffc3170c9c in THClStorage_resize (state=<optimized out>, self=<optimized out>, size=102760448) at /home/username/Downloads/cltorch/src/lib/THClStorage.cpp:196
#4  0x00007fffc33f69f1 in torch_ClStorage_resize (L=0x40000378) at /home/username/Downloads/cltorch/src/torch/generic/Storage.cpp:114
#5  0x000000000047d01a in lj_BC_FUNCC ()
#6  0x000000000046c5fd in lua_pcall ()
#7  0x0000000000406f4f in pmain ()
#8  0x000000000047d01a in lj_BC_FUNCC ()
#9  0x000000000046c677 in lua_cpcall ()
#10 0x0000000000404f04 in main ()
hughperkins commented 9 years ago

Ok, good. Then, if you do the following you should get the error message, I think:

f 1
print message

I suspect, given where it is and what it's doing, that it might say 'out of memory', i.e. "CL_MEM_OBJECT_ALLOCATION_FAILURE".

hughperkins commented 9 years ago

Note that using gdb is kind of annoying :-P So I've pushed a couple of updates to cltorch that will catch the exception and convert it into a Torch error, e.g.:

$ th /tmp/runst.lua
Using NVIDIA Corporation , OpenCL platform: NVIDIA CUDA
Using OpenCL device: GeForce 940M
a   
1e-38 *
 6.8234
[torch.ClStorage of size 1]

/home/user/torch/install/bin/luajit: /tmp/runst.lua:4: Something went wrong: std::bad_alloc at /home/user/git/cltorch/src/torch/generic/Storage.cpp:127
stack traceback:
    [C]: in function 'resize'
    /tmp/runst.lua:4: in main chunk
    [C]: in function 'dofile'
    ...user/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
    [C]: at 0x00406670
Segmentation fault

It still contains a lot of 'magic messages', but it's mildly more informative than before, perhaps? You can update to this version by simply rerunning luarocks install cltorch.

jcjohnson commented 9 years ago

@hughperkins You rock! Thanks for helping out on the OpenCL port - I know almost nothing about OpenCL.

napsternxg commented 9 years ago

Thanks a lot @hughperkins for looking into this. I have not yet updated cltorch, but I ran f 1 in gdb and got the following output:

(gdb) f 1
#1  0x00007fffc2f3719b in EasyCL::checkError (error=<optimized out>) at /home/username/Downloads/cltorch/src/EasyCL/EasyCL.cpp:514
514             throw std::runtime_error( std::string("OpenCL error, code: ") + message );
(gdb) print message
$1 = {static npos = <optimized out>, _M_dataplus = {<std::allocator<char>> = {<__gnu_cxx::new_allocator<char>> = {<No data fields>}, <No data fields>}, 
    _M_p = 0x910b78 "CL_INVALID_BUFFER_SIZE"}}

I will run my code again with the new cltorch and report any other findings.

hughperkins commented 9 years ago

"Invalid buffer size". Hmmm. It probably means that trying to allocate a buffer that is far too big, or perhaps wont fit in available memory. It plausibly could mean that the size of the buffer being requested has been corrupted somehow. However, looking at the stack trace you provided earlier, we can see the size in frame 3, size=102760448, which is number of floats I believe, so is about 400MB. It sounds like a non-corrupted number. It sounds like an amount large enough to either have exhausted available GPU memory, or to be larger than maximum GPU buffer alloc size.

For the second point, maximum GPU buffer alloc size, you might have an executable in ~/torch/install/bin called gpuinfo. If you run this, it will give an output like:

num platforms: 1

platform index: 0:
platform id: 0x1ea39b0
platform vendor: NVIDIA Corporation
platform name: NVIDIA CUDA
platform num devices: 1

   device index: 0
   device id: 0x1ea3a70
   device type: 4
   global memory size: 1023MB
   local memory size: 47KB
   global cache size: 48KB
   global cacheline size: 128
   max memory alloc size: 255MB
   max compute units: 3
   max workgroup size: 1024
   max workitem dimensions: 3
   max workitem sizes: 1024 1024 64
   device name: GeForce 940M
   opencl c version: OpenCL C 1.1 
   opencl device version: OpenCL 1.1 CUDA
   frequency MHz: 980

In the 'max memory alloc size' line you can see the largest buffer you can allocate at once. For my laptop it is 255MB, less than 400MB.

napsternxg commented 9 years ago

Here is the output after updating my cltorch package.

th neural_style_opencl.lua -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/brad_pitt.jpg -gpu 0 -backend 'clnn' -output_image profile.png -image_size 10
In Function main
Starting load model
In loadcaffe_load
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message.  If the message turns out to be larger than 1073741824 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 574671192
Successfully loaded models/VGG_ILSVRC_19_layers.caffemodel
Finished proto to lua   
In iteration %d 1
conv1_1: 64 3 3 3
In iteration %d 2
In iteration %d 3
conv1_2: 64 64 3 3
In iteration %d 4
In iteration %d 5
In iteration %d 6
conv2_1: 128 64 3 3
In iteration %d 7
In iteration %d 8
conv2_2: 128 128 3 3
In iteration %d 9
In iteration %d 10
In iteration %d 11
conv3_1: 256 128 3 3
In iteration %d 12
In iteration %d 13
conv3_2: 256 256 3 3
In iteration %d 14
In iteration %d 15
conv3_3: 256 256 3 3
In iteration %d 16
In iteration %d 17
conv3_4: 256 256 3 3
In iteration %d 18
In iteration %d 19
In iteration %d 20
conv4_1: 512 256 3 3
In iteration %d 21
In iteration %d 22
conv4_2: 512 512 3 3
In iteration %d 23
In iteration %d 24
conv4_3: 512 512 3 3
In iteration %d 25
In iteration %d 26
conv4_4: 512 512 3 3
In iteration %d 27
In iteration %d 28
In iteration %d 29
conv5_1: 512 512 3 3
In iteration %d 30
In iteration %d 31
conv5_2: 512 512 3 3
In iteration %d 32
In iteration %d 33
conv5_3: 512 512 3 3
In iteration %d 34
In iteration %d 35
conv5_4: 512 512 3 3
In iteration %d 36
In iteration %d 37
In iteration %d 38
In iteration %d 39
fc6: 1 1 25088 4096
In iteration %d 40
In iteration %d 41
In iteration %d 42
fc7: 1 1 4096 4096
In iteration %d 43
In iteration %d 44
In iteration %d 45
fc8: 1 1 4096 1000
In iteration %d 46
Finished iterations     clnn
Finished network setup  
Using Advanced Micro Devices, Inc. , OpenCL platform: AMD Accelerated Parallel Processing
Using OpenCL device: Turks
/home/username/Downloads/torch/install/bin/luajit: ...ntity/Downloads/torch/install/share/lua/5.1/nn/utils.lua:11: Something went wrong: OpenCL error, code: CL_INVALID_BUFFER_SIZE at /tmp/luarocks_cltorch-scm-1-1524/cltorch/cltorch/src/torch/generic/Storage.cpp:127
stack traceback:
        [C]: in function 'resize'
        ...ntity/Downloads/torch/install/share/lua/5.1/nn/utils.lua:11: in function 'torch_Storage_type'
        ...ntity/Downloads/torch/install/share/lua/5.1/nn/utils.lua:57: in function 'recursiveType'
        ...tity/Downloads/torch/install/share/lua/5.1/nn/Module.lua:123: in function 'type'
        ...ntity/Downloads/torch/install/share/lua/5.1/nn/utils.lua:45: in function 'recursiveType'
        ...ntity/Downloads/torch/install/share/lua/5.1/nn/utils.lua:41: in function 'recursiveType'
        ...tity/Downloads/torch/install/share/lua/5.1/nn/Module.lua:123: in function 'cl'
        neural_style_opencl.lua:66: in function 'main'
        neural_style_opencl.lua:424: in main chunk
        [C]: in function 'dofile'
        ...oads/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
        [C]: at 0x00406670
hughperkins commented 9 years ago

Ok, it looks like the error message is more informative than the original C++ exception, which is good. As for this specific error, please see the comment I sent at about the same time as yours just now. I reckon the buffer size being requested is larger than the card supports, so you will need to do something to reduce the requested buffer size, e.g. use a smaller input image.

napsternxg commented 9 years ago

If you look at the command I am running, I am setting image_size to 10. Should I try an even smaller number? The default is 512, I believe.

Also, I couldn't find gpuinfo in my torch folder.

hughperkins commented 9 years ago

Hmmmm.... 10? You mean, it's a 10 by 10 image?

Edit: for gpuinfo, you might have a system/OpenCL command clinfo. That command doesn't work for me, but it should give a bunch of information, I believe.

napsternxg commented 9 years ago

These are the lines where the image transformation is taking place:

  local content_image = image.load(params.content_image, 3)
  content_image = image.scale(content_image, params.image_size, 'bilinear')
  local content_image_caffe = preprocess(content_image):float()

  local style_image = image.load(params.style_image, 3)
  local style_size = math.ceil(params.style_scale * params.image_size)
  style_image = image.scale(style_image, style_size, 'bilinear')
  local style_image_caffe = preprocess(style_image):float()

I believe it is resizing it to 10x10, but I am not fully sure about this.
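
One way to check, rather than guess, would be to print the scaled size (a small sketch reusing the same image.load/image.scale calls as above):

local image = require 'image'
local img = image.load('examples/inputs/brad_pitt.jpg', 3)
img = image.scale(img, 10, 'bilinear')
print(img:size())   -- prints the 3 x H x W size actually produced for -image_size 10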

hughperkins commented 9 years ago

Hmmm, looks like it is the fully-connected layer that is causing the 400MB alloc:

25088*4096*4/1024/1024
= 392MB
hughperkins commented 9 years ago

If you modify the conv5_4 layer to have e.g. 256 output planes instead of 512, then you can probably reduce the fc6 layer from 25088 => 4096 to 12544 => 4096, which might fit within the card's maximum alloc?
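
For reference, here is where 12544 comes from (assuming the standard 224x224 VGG input that fc6 was sized for, which gives a 7x7 pool5 output):

print(512 * 7 * 7)   -- 25088, the original fc6 input size
print(256 * 7 * 7)   -- 12544, after halving conv5_4's output planes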

napsternxg commented 9 years ago

I couldn't find the gpuinfo file in my torch folder. Is there any other way to figure out my maximum GPU buffer alloc size?

hughperkins commented 9 years ago

Try clinfo

napsternxg commented 9 years ago

This is the output from clinfo:

$ clinfo 
Number of platforms:                             1
  Platform Profile:                              FULL_PROFILE
  Platform Version:                              OpenCL 2.0 AMD-APP (1642.5)
  Platform Name:                                 AMD Accelerated Parallel Processing
  Platform Vendor:                               Advanced Micro Devices, Inc.
  Platform Extensions:                           cl_khr_icd cl_amd_event_callback cl_amd_offline_devices 

  Platform Name:                                 AMD Accelerated Parallel Processing
Number of devices:                               2
  Device Type:                                   CL_DEVICE_TYPE_GPU
  Vendor ID:                                     1002h
  Board name:                                    
  Device Topology:                               PCI[ B#5, D#0, F#0 ]
  Max compute units:                             6
  Max work items dimensions:                     3
    Max work items[0]:                           256
    Max work items[1]:                           256
    Max work items[2]:                           256
  Max work group size:                           256
  Preferred vector width char:                   16
  Preferred vector width short:                  8
  Preferred vector width int:                    4
  Preferred vector width long:                   2
  Preferred vector width float:                  4
  Preferred vector width double:                 0
  Native vector width char:                      16
  Native vector width short:                     8
  Native vector width int:                       4
  Native vector width long:                      2
  Native vector width float:                     4
  Native vector width double:                    0
  Max clock frequency:                           650Mhz
  Address bits:                                  32
  Max memory allocation:                         134217728
  Image support:                                 Yes
  Max number of images read arguments:           128
  Max number of images write arguments:          8
  Max image 2D width:                            16384
  Max image 2D height:                           16384
  Max image 3D width:                            2048
  Max image 3D height:                           2048
  Max image 3D depth:                            2048
  Max samplers within kernel:                    16
  Max size of kernel argument:                   1024
  Alignment (bits) of base address:              2048
  Minimum alignment (bytes) for any datatype:    128
  Single precision floating point capability
    Denorms:                                     No
    Quiet NaNs:                                  Yes
    Round to nearest even:                       Yes
    Round to zero:                               Yes
    Round to +ve and infinity:                   Yes
    IEEE754-2008 fused multiply-add:             Yes
  Cache type:                                    None
  Cache line size:                               0
  Cache size:                                    0
  Global memory size:                            536870912
  Constant buffer size:                          65536
  Max number of constant args:                   8
  Local memory type:                             Scratchpad
  Local memory size:                             32768
  Max pipe arguments:                            0
  Max pipe active reservations:                  0
  Max pipe packet size:                          0
  Max global variable size:                      0
  Max global variable preferred total size:      0
  Max read/write image args:                     0
  Max on device events:                          0
  Queue on device max size:                      0
  Max on device queues:                          0
  Queue on device preferred size:                0
  SVM capabilities:                              
    Coarse grain buffer:                         No
    Fine grain buffer:                           No
    Fine grain system:                           No
    Atomics:                                     No
  Preferred platform atomic alignment:           0
  Preferred global atomic alignment:             0
  Preferred local atomic alignment:              0
  Kernel Preferred work group size multiple:     64
  Error correction support:                      0
  Unified memory for Host and Device:            0
  Profiling timer resolution:                    1
  Device endianess:                              Little
  Available:                                     Yes
  Compiler available:                            Yes
  Execution capabilities:                                
    Execute OpenCL kernels:                      Yes
    Execute native function:                     No
  Queue on Host properties:                              
    Out-of-Order:                                No
    Profiling :                                  Yes
  Queue on Device properties:                            
    Out-of-Order:                                No
    Profiling :                                  No
  Platform ID:                                   0x7fd922436fd0
  Name:                                          Turks
  Vendor:                                        Advanced Micro Devices, Inc.
  Device OpenCL C version:                       OpenCL C 1.2 
  Driver version:                                1642.5
  Profile:                                       FULL_PROFILE
  Version:                                       OpenCL 1.2 AMD-APP (1642.5)
  Extensions:                                    cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_gl_sharing cl_ext_atomic_counters_32 cl_amd_device_attribute_query cl_amd_vec3 cl_amd_printf cl_amd_media_ops cl_amd_media_ops2 cl_amd_popcnt cl_amd_image2d_from_buffer_read_only cl_khr_spir cl_khr_gl_event 

  Device Type:                                   CL_DEVICE_TYPE_CPU
  Vendor ID:                                     1002h
  Board name:                                    
  Max compute units:                             24
  Max work items dimensions:                     3
    Max work items[0]:                           1024
    Max work items[1]:                           1024
    Max work items[2]:                           1024
  Max work group size:                           1024
  Preferred vector width char:                   16
  Preferred vector width short:                  8
  Preferred vector width int:                    4
  Preferred vector width long:                   2
  Preferred vector width float:                  8
  Preferred vector width double:                 4
  Native vector width char:                      16
  Native vector width short:                     8
  Native vector width int:                       4
  Native vector width long:                      2
  Native vector width float:                     8
  Native vector width double:                    4
  Max clock frequency:                           1200Mhz
  Address bits:                                  64
  Max memory allocation:                         8415937536
  Image support:                                 Yes
  Max number of images read arguments:           128
  Max number of images write arguments:          64
  Max image 2D width:                            8192
  Max image 2D height:                           8192
  Max image 3D width:                            2048
  Max image 3D height:                           2048
  Max image 3D depth:                            2048
  Max samplers within kernel:                    16
  Max size of kernel argument:                   4096
  Alignment (bits) of base address:              1024
  Minimum alignment (bytes) for any datatype:    128
  Single precision floating point capability
    Denorms:                                     Yes
    Quiet NaNs:                                  Yes
    Round to nearest even:                       Yes
    Round to zero:                               Yes
    Round to +ve and infinity:                   Yes
    IEEE754-2008 fused multiply-add:             Yes
  Cache type:                                    Read/Write
  Cache line size:                               64
  Cache size:                                    32768
  Global memory size:                            33663750144
  Constant buffer size:                          65536
  Max number of constant args:                   8
  Local memory type:                             Global
  Local memory size:                             32768
  Max pipe arguments:                            16
  Max pipe active reservations:                  16
  Max pipe packet size:                          4120970240
  Max global variable size:                      1879048192
  Max global variable preferred total size:      1879048192
  Max read/write image args:                     64
  Max on device events:                          0
  Queue on device max size:                      0
  Max on device queues:                          0
  Queue on device preferred size:                0
  SVM capabilities:                              
    Coarse grain buffer:                         Yes
    Fine grain buffer:                           Yes
    Fine grain system:                           Yes
    Atomics:                                     Yes
  Preferred platform atomic alignment:           0
  Preferred global atomic alignment:             0
  Preferred local atomic alignment:              0
  Kernel Preferred work group size multiple:     1
  Error correction support:                      0
  Unified memory for Host and Device:            1
  Profiling timer resolution:                    1
  Device endianess:                              Little
  Available:                                     Yes
  Compiler available:                            Yes
  Execution capabilities:                                
    Execute OpenCL kernels:                      Yes
    Execute native function:                     Yes
  Queue on Host properties:                              
    Out-of-Order:                                No
    Profiling :                                  Yes
  Queue on Device properties:                            
    Out-of-Order:                                No
    Profiling :                                  No
  Platform ID:                                   0x7fd922436fd0
  Name:                                          Intel(R) Xeon(R) CPU E5-2640 0 @ 2.50GHz
  Vendor:                                        GenuineIntel
  Device OpenCL C version:                       OpenCL C 1.2 
  Driver version:                                1642.5 (sse2,avx)
  Profile:                                       FULL_PROFILE
  Version:                                       OpenCL 1.2 AMD-APP (1642.5)
  Extensions:                                    cl_khr_fp64 cl_amd_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_gl_sharing cl_ext_device_fission cl_amd_device_attribute_query cl_amd_vec3 cl_amd_printf cl_amd_media_ops cl_amd_media_ops2 cl_amd_popcnt cl_khr_spir cl_khr_gl_event 
hughperkins commented 9 years ago

Global memory size: 536870912 => your card has 512MB of memory, right?
Max memory allocation: 134217728 => max alloc size is 128MB

So, what you can try doing is changing the conv5_4 layer to have 128 output planes, and changing fc6 from 25088 => 4096 to 6272 => 4096.
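
In the generated VGG_ILSVRC_19_layers_deploy.prototxt.opencl.lua, that would mean editing those two lines roughly as follows (a sketch only; note the pretrained weights inside the .caffemodel still have the original 512-plane shapes):

table.insert(model, {'conv5_4', nn.SpatialConvolutionMM(512, 128, 3, 3, 1, 1, 1, 1)})
-- ...
table.insert(model, {'fc6', nn.Linear(6272, 4096)})   -- 6272 = 128 * 7 * 7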

napsternxg commented 9 years ago

I made the changes, but now I am getting a new error:

$ th neural_style_opencl.lua -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/brad_pitt.jpg -gpu 0 -backend 'clnn' -output_image profile.png -image_size 10
In Function main
Starting load model
In loadcaffe_load
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:505] Reading dangerously large protocol message.  If the message turns out to be larger than 1073741824 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 574671192
Successfully loaded models/VGG_ILSVRC_19_layers.caffemodel
Updated Line to: %s     table.insert(model, {'conv5_4', nn.SpatialConvolutionMM(128, 128, 3, 3, 1, 1, 1, 1)})   
Updated Line to: %s     table.insert(model, {'fc6', nn.Linear(6272, 4096)})
Finished proto to lua   
conv1_1: 64 3 3 3
conv1_2: 64 64 3 3
conv2_1: 128 64 3 3
conv2_2: 128 128 3 3
conv3_1: 256 128 3 3
conv3_2: 256 256 3 3
conv3_3: 256 256 3 3
conv3_4: 256 256 3 3
conv4_1: 512 256 3 3
conv4_2: 512 512 3 3
conv4_3: 512 512 3 3
conv4_4: 512 512 3 3
conv5_1: 512 512 3 3
conv5_2: 512 512 3 3
conv5_3: 512 512 3 3
conv5_4: 512 512 3 3
/home/username/Downloads/torch/install/bin/luajit: ...y/Downloads/torch/install/share/lua/5.1/cltorch/init.lua:30: inconsistent tensor size at /home/username/Downloads/torch/pkg/torch/lib/TH/generic/THTensorCopy.c:21
stack traceback:
        [C]: in function 'cloldcopy'
        ...y/Downloads/torch/install/share/lua/5.1/cltorch/init.lua:30: in function 'copy'
        ./loadcaffe_wrapper.lua:97: in function 'load'
        neural_style_opencl.lua:62: in function 'main'
        neural_style_opencl.lua:424: in main chunk
        [C]: in function 'dofile'
        ...oads/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
        [C]: at 0x00406670

I believe the reason is that the original caffe model has those layer sizes baked in, so they can't be changed. This neural-style code only does inference and does not train the model, hence I should use the original model. Probably @jcjohnson can confirm this.
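
For reference, the element counts show the mismatch: loadcaffe copies the pretrained weight blob into the generated module, and after the edit the sizes no longer agree.

print(512 * 512 * 3 * 3)   -- 2359296 weights in the caffemodel's conv5_4 blob
print(128 * 128 * 3 * 3)   -- 147456 weights in the edited conv5_4 above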

hughperkins commented 9 years ago

Hmmm, right, your explanation appears to match the error message. I guess you will need to use a smaller model perhaps. How about this one? https://gist.github.com/mavenlin/d802a5849de39225bcc6

napsternxg commented 9 years ago

I have pushed my changes to https://github.com/napsternxg/neural-style/tree/opencl

I will try to run with the smaller model. In the meantime, would it be possible for you to run my code and see if it works? Maybe the issue is only my GPU memory. It would be great to know whether the port works on other OpenCL systems without using any CUDA libraries.

vkorablin commented 9 years ago

Tried it on my 1GB card.

Command line: th neural_style_opencl.lua -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/brad_pitt.jpg -gpu 0 -backend 'clnn' -output_image profile.png -image_size 25

Got further than @napsternxg managed (so it does seem to be lack of GPU memory in his case), but then got a 'Not implemented' error:

/home/vkorablin/torch/install/bin/luajit: ...lin/torch/install/share/lua/5.1/nn/SpatialMaxPooling.lua:41: Not implemented at /tmp/luarocks_clnn-scm-1-7207/clnn/SpatialMaxPooling.cpp:166
stack traceback:
    [C]: in function 'SpatialMaxPooling_updateGradInput'
    ...lin/torch/install/share/lua/5.1/nn/SpatialMaxPooling.lua:41: in function 'updateGradInput'
    /home/vkorablin/torch/install/share/lua/5.1/nn/Module.lua:30: in function 'backward'
    .../vkorablin/torch/install/share/lua/5.1/nn/Sequential.lua:84: in function 'backward'
    neural_style_opencl.lua:244: in function 'opfunc'
    /home/vkorablin/torch/install/share/lua/5.1/optim/lbfgs.lua:66: in function 'lbfgs'
    neural_style_opencl.lua:263: in function 'main'
    neural_style_opencl.lua:424: in main chunk
    [C]: in function 'dofile'
    ...blin/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
    [C]: at 0x00406670

The line that throws that error: https://github.com/hughperkins/clnn/blob/6f79cd72d4a2434dd55d5e8a365013c632146155/SpatialMaxPooling.cpp#L166

napsternxg commented 9 years ago

Yes, I just tried it with the vgg_normalised.caffemodel file which comes with the code and also got further than my last results. However, I am getting a different error than yours:

$ th neural_style_opencl.lua -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/brad_pitt.jpg -gpu 0 -backend 'clnn' -output_image profile.png -image_size 10 -model_file models/vgg_normalised.caffemodel 
In Function main
Starting load model
In loadcaffe_load
Successfully loaded models/vgg_normalised.caffemodel
Finished proto to lua   
conv1_1: 64 3 3 3
conv1_2: 64 64 3 3
conv2_1: 128 64 3 3
conv2_2: 128 128 3 3
conv3_1: 256 128 3 3
conv3_2: 256 256 3 3
conv3_3: 256 256 3 3
conv3_4: 256 256 3 3
conv4_1: 512 256 3 3
conv4_2: 512 512 3 3
conv4_3: 512 512 3 3
conv4_4: 512 512 3 3
conv5_1: 512 512 3 3
conv5_2: 512 512 3 3
conv5_3: 512 512 3 3
conv5_4: 512 512 3 3
Finished iterations     clnn
Finished network setup  
Using Advanced Micro Devices, Inc. , OpenCL platform: AMD Accelerated Parallel Processing
Using OpenCL device: Turks
Finished content Image preprocess
Finished style Image preprocess 
Finished caffe variables
Starting network setup  
Apply_1t_1s_0pt_-2_*out = val1 build log: 
"/tmp/OCL12780T5.cl", line 53: warning: variable "thisLinearId" was declared
          but never referenced
        int thisLinearId;
            ^

Apply_1t_0s_0pt_-2_*out = (*out > 0) ? *out : 0 build log: 
"/tmp/OCL12780T19.cl", line 49: warning: variable "thisLinearId" was declared
          but never referenced
        int thisLinearId;
            ^

Apply_1t_1s_0pt_-2_*out *= val1 build log: 
"/tmp/OCL12780T26.cl", line 53: warning: variable "thisLinearId" was declared
          but never referenced
        int thisLinearId;
            ^

allocate workbuffer
Apply_2t_0s_0pt_-2_-2_*out -= *in1 build log: 
"/tmp/OCL12780T29.cl", line 56: warning: variable "thisLinearId" was declared
          but never referenced
        int thisLinearId;
            ^

Apply_1t_1s_0pt_-2_*out = pown(*out, val1) build log: 
"/tmp/OCL12780T32.cl", line 53: warning: variable "thisLinearId" was declared
          but never referenced
        int thisLinearId;
            ^

THClReduceAll.cl build log: 
"/tmp/OCL12780T38.cl", line 9: warning: variable "in1" was declared but never
          referenced
    float *in1 = &_in1;
           ^

"/tmp/OCL12780T38.cl", line 10: warning: variable "out" was declared but never
          referenced
    float *out = &_out;
           ^

allocate workbuffer
allocate workbuffer
/home/username/Downloads/torch/install/bin/luajit: ...ads/torch/install/share/lua/5.1/nn/SpatialMaxPooling.lua:36: bad argument #2 to 'SpatialMaxPooling_updateOutput' (input image smaller than kernel size)
stack traceback:
        [C]: in function 'SpatialMaxPooling_updateOutput'
        ...ads/torch/install/share/lua/5.1/nn/SpatialMaxPooling.lua:36: in function 'updateOutput'
        .../Downloads/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
        neural_style_opencl.lua:149: in function 'main'
        neural_style_opencl.lua:424: in main chunk
        [C]: in function 'dofile'
        ...oads/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
        [C]: at 0x00406670
vkorablin commented 9 years ago

@napsternxg right, I got that one too.

Increase your image size; I think that's the problem here (note it says "input image smaller than kernel size"). I used -image_size 25 above.
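
For reference, each of VGG-19's five max-pooling layers halves the spatial size, so a 10-pixel input shrinks below the 2x2 pooling kernel within a few stages (a rough sketch, assuming ceil-mode pooling; with floor-mode it shrinks even faster):

local s = 10
for i = 1, 5 do
  s = math.ceil(s / 2)
  print('after pool' .. i .. ':', s)   -- 5, 3, 2, 1, 1
end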

napsternxg commented 9 years ago

Here are a few more errors while running with different configurations using model_file=vgg_normalised.caffemodel:

Using default image_size=512

$ th neural_style_opencl.lua -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/brad_pitt.jpg -gpu 0 -backend 'clnn' -output_image profile.png -model_file models/vgg_normalised.caffemodel 
In Function main
Starting load model
In loadcaffe_load
Successfully loaded models/vgg_normalised.caffemodel
Finished proto to lua   
conv1_1: 64 3 3 3
conv1_2: 64 64 3 3
conv2_1: 128 64 3 3
conv2_2: 128 128 3 3
conv3_1: 256 128 3 3
conv3_2: 256 256 3 3
conv3_3: 256 256 3 3
conv3_4: 256 256 3 3
conv4_1: 512 256 3 3
conv4_2: 512 512 3 3
conv4_3: 512 512 3 3
conv4_4: 512 512 3 3
conv5_1: 512 512 3 3
conv5_2: 512 512 3 3
conv5_3: 512 512 3 3
conv5_4: 512 512 3 3
Finished iterations     clnn
Finished network setup  
Using Advanced Micro Devices, Inc. , OpenCL platform: AMD Accelerated Parallel Processing
Using OpenCL device: Turks
Finished content Image preprocess
Finished style Image preprocess 
Finished caffe variables
Starting network setup  
Apply_1t_1s_0pt_-2_*out = val1 build log: 
"/tmp/OCL12816T5.cl", line 53: warning: variable "thisLinearId" was declared
          but never referenced
        int thisLinearId;
            ^

Apply_1t_0s_0pt_-2_*out = (*out > 0) ? *out : 0 build log: 
"/tmp/OCL12816T19.cl", line 49: warning: variable "thisLinearId" was declared
          but never referenced
        int thisLinearId;
            ^

Apply_1t_1s_0pt_-2_*out *= val1 build log: 
"/tmp/OCL12816T26.cl", line 53: warning: variable "thisLinearId" was declared
          but never referenced
        int thisLinearId;
            ^

allocate workbuffer
Apply_2t_0s_0pt_-2_-2_*out -= *in1 build log: 
"/tmp/OCL12816T29.cl", line 56: warning: variable "thisLinearId" was declared
          but never referenced
        int thisLinearId;
            ^

Apply_1t_1s_0pt_-2_*out = pown(*out, val1) build log: 
"/tmp/OCL12816T32.cl", line 53: warning: variable "thisLinearId" was declared
          but never referenced
        int thisLinearId;
            ^

THClReduceAll.cl build log: 
"/tmp/OCL12816T38.cl", line 9: warning: variable "in1" was declared but never
          referenced
    float *in1 = &_in1;
           ^

"/tmp/OCL12816T38.cl", line 10: warning: variable "out" was declared but never
          referenced
    float *out = &_out;
           ^

/home/username/Downloads/torch/install/bin/luajit: C++ exception

Another one running with image_size=50

$ th neural_style_opencl.lua -style_image examples/inputs/picasso_selfport1907.jpg -content_image examples/inputs/brad_pitt.jpg -gpu 0 -backend 'clnn' -output_image profile.png -image_size 50 -model_file models/vgg_normalised.caffemodel 
In Function main
Starting load model
In loadcaffe_load
Successfully loaded models/vgg_normalised.caffemodel
Finished proto to lua   
conv1_1: 64 3 3 3
conv1_2: 64 64 3 3
conv2_1: 128 64 3 3
conv2_2: 128 128 3 3
conv3_1: 256 128 3 3
conv3_2: 256 256 3 3
conv3_3: 256 256 3 3
conv3_4: 256 256 3 3
conv4_1: 512 256 3 3
conv4_2: 512 512 3 3
conv4_3: 512 512 3 3
conv4_4: 512 512 3 3
conv5_1: 512 512 3 3
conv5_2: 512 512 3 3
conv5_3: 512 512 3 3
conv5_4: 512 512 3 3
Finished iterations     clnn
Finished network setup  
Using Advanced Micro Devices, Inc. , OpenCL platform: AMD Accelerated Parallel Processing
Using OpenCL device: Turks
Finished content Image preprocess
Finished style Image preprocess 
Finished caffe variables
Starting network setup  
Apply_1t_1s_0pt_-2_*out = val1 build log: 
"/tmp/OCL12852T5.cl", line 53: warning: variable "thisLinearId" was declared
          but never referenced
        int thisLinearId;
            ^

Apply_1t_0s_0pt_-2_*out = (*out > 0) ? *out : 0 build log: 
"/tmp/OCL12852T19.cl", line 49: warning: variable "thisLinearId" was declared
          but never referenced
        int thisLinearId;
            ^

Apply_1t_1s_0pt_-2_*out *= val1 build log: 
"/tmp/OCL12852T26.cl", line 53: warning: variable "thisLinearId" was declared
          but never referenced
        int thisLinearId;
            ^

allocate workbuffer
Apply_2t_0s_0pt_-2_-2_*out -= *in1 build log: 
"/tmp/OCL12852T29.cl", line 56: warning: variable "thisLinearId" was declared
          but never referenced
        int thisLinearId;
            ^

Apply_1t_1s_0pt_-2_*out = pown(*out, val1) build log: 
"/tmp/OCL12852T32.cl", line 53: warning: variable "thisLinearId" was declared
          but never referenced
        int thisLinearId;
            ^

THClReduceAll.cl build log: 
"/tmp/OCL12852T38.cl", line 9: warning: variable "in1" was declared but never
          referenced
    float *in1 = &_in1;
           ^

"/tmp/OCL12852T38.cl", line 10: warning: variable "out" was declared but never
          referenced
    float *out = &_out;
           ^

allocate workbuffer
allocate workbuffer
allocate workbuffer
allocate workbuffer
allocate workbuffer
Running optimization with L-BFGS
Apply_3t_0s_0pt_-2_-2_-2_*out = 7.62939453125e-06 * (*in1 - *in2) build log: 
"/tmp/OCL12852T89.cl", line 37: warning: double-precision constant is
          represented as single-precision constant because double is not
          enabled
      *out = 7.62939453125e-06 * (*in1 - *in2);
             ^

"/tmp/OCL12852T89.cl", line 63: warning: variable "thisLinearId" was declared
          but never referenced
        int thisLinearId;
            ^

Apply_2t_0s_0pt_-2_-2_*out += *in1 build log: 
"/tmp/OCL12852T100.cl", line 56: warning: variable "thisLinearId" was declared
          but never referenced
        int thisLinearId;
            ^

Apply_3t_0s_0pt_-2_-2_-2_*out = (*in1 > 0) ? *in2 : 0.0f build log: 
"/tmp/OCL12852T103.cl", line 63: warning: variable "thisLinearId" was declared
          but never referenced
        int thisLinearId;
            ^

/home/username/Downloads/torch/install/bin/luajit: ...ads/torch/install/share/lua/5.1/nn/SpatialMaxPooling.lua:41: Not implemented at /tmp/luarocks_clnn-scm-1-7940/clnn/SpatialMaxPooling.cpp:166
stack traceback:
        [C]: in function 'SpatialMaxPooling_updateGradInput'
        ...ads/torch/install/share/lua/5.1/nn/SpatialMaxPooling.lua:41: in function 'updateGradInput'
        ...tity/Downloads/torch/install/share/lua/5.1/nn/Module.lua:30: in function 'backward'
        .../Downloads/torch/install/share/lua/5.1/nn/Sequential.lua:84: in function 'backward'
        neural_style_opencl.lua:244: in function 'opfunc'
        ...ty/Downloads/torch/install/share/lua/5.1/optim/lbfgs.lua:66: in function 'lbfgs'
        neural_style_opencl.lua:263: in function 'main'
        neural_style_opencl.lua:424: in main chunk
        [C]: in function 'dofile'
        ...oads/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
        [C]: at 0x00406670

@vkorablin I get the same error as above using image_size=25

hughperkins commented 9 years ago

Hi,

napsternxg commented 9 years ago

@hughperkins it looks like the new clnn build is broken. I am getting the following error when installing clnn:

$ luarocks install clnn
Installing https://raw.githubusercontent.com/torch/rocks/master/clnn-scm-1.rockspec...
Using https://raw.githubusercontent.com/torch/rocks/master/clnn-scm-1.rockspec... switching to 'build' mode
Cloning into 'clnn'...
remote: Counting objects: 71, done.
remote: Compressing objects: 100% (65/65), done.
remote: Total 71 (delta 12), reused 31 (delta 3), pack-reused 0
Receiving objects: 100% (71/71), 79.40 KiB | 0 bytes/s, done.
Resolving deltas: 100% (12/12), done.
Checking connectivity... done.
cmake -E make_directory build && cd build && cmake .. -DCMAKE_BUILD_TYPE=RelWithDebInfo -DCMAKE_PREFIX_PATH="/home/username/Downloads/torch/install/bin/.." -DCMAKE_INSTALL_PREFIX="/home/username/Downloads/torch/install/lib/luarocks/rocks/clnn/scm-1" && make -j$(getconf _NPROCESSORS_ONLN) install

-- The C compiler identification is GNU 4.8.4
-- The CXX compiler identification is GNU 4.8.4
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Found Torch7 in /home/username/Downloads/torch/install
Torch_INSTALL_LIB /home/username/Downloads/torch/install/lib
-- Configuring done
-- Generating done
-- Build files have been written to: /tmp/luarocks_clnn-scm-1-7635/clnn/build
Scanning dependencies of target clnn
Scanning dependencies of target clnn_static
[ 10%] [ 20%] [ 30%] [ 50%] [ 50%] [ 60%] [ 70%] [ 80%] [ 90%] Building CXX object CMakeFiles/clnn_static.dir/init.cpp.o
Building CXX object CMakeFiles/clnn.dir/init.cpp.o
Building CXX object CMakeFiles/clnn_static.dir/utils.cpp.o
Building CXX object CMakeFiles/clnn.dir/SpatialConvolutionMM.cpp.o
[100%] Building CXX object CMakeFiles/clnn.dir/SpatialMaxPooling.cpp.o
Building CXX object CMakeFiles/clnn.dir/utils.cpp.o
Building CXX object CMakeFiles/clnn_static.dir/SpatialMaxPooling.cpp.o
Building CXX object CMakeFiles/clnn.dir/SpatialAveragePooling.cpp.o
Building CXX object CMakeFiles/clnn_static.dir/SpatialAveragePooling.cpp.o
Building CXX object CMakeFiles/clnn_static.dir/SpatialConvolutionMM.cpp.o
In file included from /tmp/luarocks_clnn-scm-1-7635/clnn/SpatialMaxPooling.cpp:8:0:
/home/username/Downloads/torch/install/include/easycl/EasyCL.h:15:19: fatal error: CL/cl.h: No such file or directory
 #include <CL/cl.h>
                   ^
compilation terminated.
In file included from /tmp/luarocks_clnn-scm-1-7635/clnn/SpatialMaxPooling.cpp:8:0:
/home/username/Downloads/torch/install/include/easycl/EasyCL.h:15:19: fatal error: CL/cl.h: No such file or directory
 #include <CL/cl.h>
                   ^
compilation terminated.
In file included from /tmp/luarocks_clnn-scm-1-7635/clnn/SpatialAveragePooling.cpp:9:0:
/home/username/Downloads/torch/install/include/easycl/EasyCL.h:15:19: fatal error: CL/cl.h: No such file or directory
 #include <CL/cl.h>
                   ^
compilation terminated.
In file included from /tmp/luarocks_clnn-scm-1-7635/clnn/SpatialAveragePooling.cpp:9:0:
/home/username/Downloads/torch/install/include/easycl/EasyCL.h:15:19: fatal error: CL/cl.h: No such file or directory
 #include <CL/cl.h>
                   ^
compilation terminated.
In file included from /tmp/luarocks_clnn-scm-1-7635/clnn/SpatialConvolutionMM.cpp:10:0:
/home/username/Downloads/torch/install/include/easycl/DeviceInfo.h:9:19: fatal error: CL/cl.h: No such file or directory
 #include <CL/cl.h>
                   ^
compilation terminated.
In file included from /tmp/luarocks_clnn-scm-1-7635/clnn/SpatialConvolutionMM.cpp:10:0:
/home/username/Downloads/torch/install/include/easycl/DeviceInfo.h:9:19: fatal error: CL/cl.h: No such file or directory
 #include <CL/cl.h>
                   ^
compilation terminated.
make[2]: *** [CMakeFiles/clnn_static.dir/SpatialAveragePooling.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[2]: *** [CMakeFiles/clnn.dir/SpatialMaxPooling.cpp.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[2]: *** [CMakeFiles/clnn.dir/SpatialAveragePooling.cpp.o] Error 1
make[2]: *** [CMakeFiles/clnn_static.dir/SpatialMaxPooling.cpp.o] Error 1
make[2]: *** [CMakeFiles/clnn_static.dir/SpatialConvolutionMM.cpp.o] Error 1
make[1]: *** [CMakeFiles/clnn_static.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
make[2]: *** [CMakeFiles/clnn.dir/SpatialConvolutionMM.cpp.o] Error 1
make[1]: *** [CMakeFiles/clnn.dir/all] Error 2
make: *** [all] Error 2

Error: Build error: Failed building.
hughperkins commented 9 years ago

Ok, do you mind if I ask what OS you are using?

napsternxg commented 9 years ago

Ubuntu 14.04 LTS

hughperkins commented 9 years ago

Hmmm, ok. Me too. Builds ok for me. Odd. Will take a look....

napsternxg commented 9 years ago

Maybe you failed to check in the file? Possible?

hughperkins commented 9 years ago

Well, I tested just now by doing:

cd ~/torch/install/include
rm -Rf clew.h easycl THCl
luarocks remove clnn
luarocks remove cltorch
luarocks install cltorch
luarocks install clnn

... and it worked ok.

napsternxg commented 9 years ago

I followed exactly the same steps as yours. Still getting the same error.

hughperkins commented 9 years ago

Basically, I added a USE_CLEW switch to EasyCL a couple of weeks ago, to fix a different problem where using clew was causing issues. The problem is that clnn doesn't define USE_CLEW, so a bunch of stuff fails. I think I need to ponder how to add the USE_CLEW define to the clnn build somehow. Still, it is odd that the build succeeds on my machine; I must have some include files lying around somewhere.

hughperkins commented 9 years ago

I followed exactly the same steps as yours. Still getting the same error.

Ok.

As a workaround for now, you could try sudo apt-get install -y opencl-headers. Ideally you shouldn't need to do that, but I reckon there are enough issues right now without also working out how to avoid it, and it's probably painless enough to simply install the headers?

napsternxg commented 9 years ago

OK now the build succeeded. Let me try to run the code now.

hughperkins commented 9 years ago

Ok :-)

napsternxg commented 9 years ago

Ok, still some errors, but this time it got quite far.

/home/username/Downloads/torch/install/bin/luajit: Please copy to cpu, using :float(), then set the value, then copy back using :cl() at /tmp/luarocks_cltorch-scm-1-2332/cltorch/cltorch/src/torch/generic/Tensor.cpp:800
stack traceback:
        [C]: at 0x7f14a929cee0
        [C]: in function '__newindex'
        ...ty/Downloads/torch/install/share/lua/5.1/optim/lbfgs.lua:156: in function 'lbfgs'
        neural_style_opencl.lua:263: in function 'main'
        neural_style_opencl.lua:424: in main chunk
        [C]: in function 'dofile'
        ...oads/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
        [C]: at 0x00406670
hughperkins commented 9 years ago

Hmmm, interesting. It's because I removed the possibility of directly printing out values from a GPU buffer, since it's really slow. Will dig a bit...

hughperkins commented 9 years ago

Hmmm, might take a while for me to figure out. What if you use ADAM instead?
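
A minimal sketch of what the optim.adam call looks like in place of optim.lbfgs (toy objective and assumed learning rate, not taken from the thread; in neural_style the opfunc returns the style+content loss and its gradient w.r.t. the image):

require 'torch'
require 'optim'
local img = torch.randn(3, 10, 10)        -- stand-in for the image being optimized
local function feval(x)
  return x:norm()^2, x * 2                -- toy loss and its gradient
end
local optim_state = {learningRate = 10}   -- assumed value; tune as needed
for t = 1, 5 do
  optim.adam(feval, img, optim_state)
end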

napsternxg commented 9 years ago

Ok, now I am getting a different error. Probably something in the nn module.

/home/username/Downloads/torch/install/bin/luajit: .../Downloads/torch/install/share/lua/5.1/nn/Sequential.lua:44: bad argument #1 to 'updateOutput' (3D or 4D (batch mode) tensor is expected)
stack traceback:
        [C]: in function 'updateOutput'
        .../Downloads/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
        neural_style_opencl.lua:243: in function 'opfunc'
        ...ity/Downloads/torch/install/share/lua/5.1/optim/adam.lua:33: in function 'adam'
        neural_style_opencl.lua:267: in function 'main'
        neural_style_opencl.lua:424: in main chunk
        [C]: in function 'dofile'
        ...oads/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:131: in main chunk
        [C]: at 0x00406670
hughperkins commented 9 years ago

Hmmm. Can you open your installed Sequential.lua file, like gedit ~/torch/install/share/lua/5.1/nn/Sequential.lua, and hack the updateOutput function a bit, to look as follows please:

function Sequential:updateOutput(input)
   print('input:size()', input:size())
   local currentOutput = input
   for i=1,#self.modules do
      print('currentOutput:size()', currentOutput:size())
      print('self.modules[', i , ']=', self.modules[i])
      currentOutput = self.modules[i]:updateOutput(currentOutput)
   end
   self.output = currentOutput
   return currentOutput
end

This will give us a bit more information about what is arriving and so on. (To revert the change later, simply reinstall the nn library.)