Closed dionysio closed 8 years ago
Hi dionysio,
(At a minimum, can you please provide a command line, or sequence of command lines, that I can run from the CNNMRF repo to reproduce the issue?)
The 'invalid workgroup size' problem was fixed.
Command line arguments are not implemented yet; try run_trans.lua with these parameters inside the file:
{'potrait1', 'picasso', 'image', 384, 3, {100, 100, 100}, {12, 21}, {1e-4, 1e-4}, {3, 3}, 1, 1, {2, 2}, {2, 2}, {0, 0}, {23}, 2e1, 1e-3, 'speed', 256, 16, 'clnn'}
It seems 'invalid workgroup size' was fixed.
You mean, no issue remaining? Or there is an additional issue that needs to be addressed?
Sorry for the confusion, that was only part of the issue. It still crashes for me with the parameters I posted, and the stack trace is:
Implementing mrf layers ...
THClReduceAll.cl build log:
"/tmp/OCLX7HnZ5.cl", line 9: warning: variable "in1" was declared but never
referenced
float *in1 = &_in1;
^
"/tmp/OCLX7HnZ5.cl", line 10: warning: variable "out" was declared but never
referenced
float *out = &_out;
^
/home/dio/torch/install/bin/luajit: Error: copyTo failed with -8 at /tmp/luarocks_cltorch-scm-1-5115/cltorch/cltorch/src/lib/THClTensorCopy.cpp:162
stack traceback:
[C]: at 0x7fb267d80020
[C]: in function '__newindex'
...eDrive/Documents/Python/animatronio/CNNMRF/mylib/mrf.lua:177: in function 'updateGradInput'
/home/dio/torch/install/share/lua/5.1/nn/Module.lua:30: in function 'backward'
/home/dio/torch/install/share/lua/5.1/nn/Sequential.lua:84: in function 'backward'
./transfer_CNNMRF_wrapper.lua:378: in function 'opfunc'
...ocuments/Python/animatronio/CNNMRF/mylib/myoptimizer.lua:34: in function 'mylbfgs'
./transfer_CNNMRF_wrapper.lua:548: in function 'main'
./transfer_CNNMRF_wrapper.lua:592: in function 'state'
run_trans.lua:82: in main chunk
[C]: in function 'dofile'
.../dio/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:145: in main chunk
[C]: at 0x00405d70
Ok, can you run gpuinfo or clinfo and provide the output please?
On an Amazon EC2 K520, it runs ok for me:
mrf_layers:
14
24
network has been built.
*****************************************************
Synthesis started at resolution 1
*****************************************************
Implementing mrf layers ...
Iteration 10, 100
Iteration 20, 100
Iteration 30, 100
Iteration 40, 100
Iteration 50, 100
Iteration 60, 100
Iteration 70, 100
Iteration 80, 100
Iteration 90, 100
Iteration 100, 100
<mylbfgs> reached max number of iterations
Synthesis finished at resolution 1, 10.074316978455 seconds
*****************************************************
Synthesis started at resolution 2
*****************************************************
Implementing mrf layers ...
Iteration 10, 100
Iteration 20, 100
Iteration 30, 100
Iteration 40, 100
Iteration 50, 100
A K520 is a kind of old-ish GPU, with 4GB memory. Is it possible you are trying to run this on an old-ish GPU, which is under-dimensioned for this model? I think the next step is to get the output of gpuinfo and/or clinfo, and take a look at that.
clinfo output:
Number of platforms: 1
Platform Profile: FULL_PROFILE
Platform Version: OpenCL 2.0 AMD-APP (1800.11)
Platform Name: AMD Accelerated Parallel Processing
Platform Vendor: Advanced Micro Devices, Inc.
Platform Extensions: cl_khr_icd cl_amd_event_callback cl_amd_offline_devices
Platform Name: AMD Accelerated Parallel Processing
Number of devices: 2
Device Type: CL_DEVICE_TYPE_GPU
Vendor ID: 1002h
Board name: AMD Radeon HD 7900 Series
Device Topology: PCI[ B#1, D#0, F#0 ]
Max compute units: 28
Max work items dimensions: 3
Max work items[0]: 256
Max work items[1]: 256
Max work items[2]: 256
Max work group size: 256
Preferred vector width char: 4
Preferred vector width short: 2
Preferred vector width int: 1
Preferred vector width long: 1
Preferred vector width float: 1
Preferred vector width double: 1
Native vector width char: 4
Native vector width short: 2
Native vector width int: 1
Native vector width long: 1
Native vector width float: 1
Native vector width double: 1
Max clock frequency: 940Mhz
Address bits: 32
Max memory allocation: 2200436736
Image support: Yes
Max number of images read arguments: 128
Max number of images write arguments: 8
Max image 2D width: 16384
Max image 2D height: 16384
Max image 3D width: 2048
Max image 3D height: 2048
Max image 3D depth: 2048
Max samplers within kernel: 16
Max size of kernel argument: 1024
Alignment (bits) of base address: 2048
Minimum alignment (bytes) for any datatype: 128
Single precision floating point capability
Denorms: No
Quiet NaNs: Yes
Round to nearest even: Yes
Round to zero: Yes
Round to +ve and infinity: Yes
IEEE754-2008 fused multiply-add: Yes
Cache type: Read/Write
Cache line size: 64
Cache size: 16384
Global memory size: 3012558848
Constant buffer size: 65536
Max number of constant args: 8
Local memory type: Scratchpad
Local memory size: 32768
Max pipe arguments: 0
Max pipe active reservations: 0
Max pipe packet size: 0
Max global variable size: 0
Max global variable preferred total size: 0
Max read/write image args: 0
Max on device events: 0
Queue on device max size: 0
Max on device queues: 0
Queue on device preferred size: 0
SVM capabilities:
Coarse grain buffer: No
Fine grain buffer: No
Fine grain system: No
Atomics: No
Preferred platform atomic alignment: 0
Preferred global atomic alignment: 0
Preferred local atomic alignment: 0
Kernel Preferred work group size multiple: 64
Error correction support: 0
Unified memory for Host and Device: 0
Profiling timer resolution: 1
Device endianess: Little
Available: Yes
Compiler available: Yes
Execution capabilities:
Execute OpenCL kernels: Yes
Execute native function: No
Queue on Host properties:
Out-of-Order: No
Profiling : Yes
Queue on Device properties:
Out-of-Order: No
Profiling : No
Platform ID: 0x7fa35922b430
Name: Tahiti
Vendor: Advanced Micro Devices, Inc.
Device OpenCL C version: OpenCL C 1.2
Driver version: 1800.11 (VM)
Profile: FULL_PROFILE
Version: OpenCL 1.2 AMD-APP (1800.11)
Extensions: cl_khr_fp64 cl_amd_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_gl_sharing cl_ext_atomic_counters_32 cl_amd_device_attribute_query cl_amd_vec3 cl_amd_printf cl_amd_media_ops cl_amd_media_ops2 cl_amd_popcnt cl_khr_image2d_from_buffer cl_khr_spir cl_khr_gl_event
Device Type: CL_DEVICE_TYPE_CPU
Vendor ID: 1002h
Board name:
Max compute units: 4
Max work items dimensions: 3
Max work items[0]: 1024
Max work items[1]: 1024
Max work items[2]: 1024
Max work group size: 1024
Preferred vector width char: 16
Preferred vector width short: 8
Preferred vector width int: 4
Preferred vector width long: 2
Preferred vector width float: 8
Preferred vector width double: 4
Native vector width char: 16
Native vector width short: 8
Native vector width int: 4
Native vector width long: 2
Native vector width float: 8
Native vector width double: 4
Max clock frequency: 3619Mhz
Address bits: 64
Max memory allocation: 3134081024
Image support: Yes
Max number of images read arguments: 128
Max number of images write arguments: 64
Max image 2D width: 8192
Max image 2D height: 8192
Max image 3D width: 2048
Max image 3D height: 2048
Max image 3D depth: 2048
Max samplers within kernel: 16
Max size of kernel argument: 4096
Alignment (bits) of base address: 1024
Minimum alignment (bytes) for any datatype: 128
Single precision floating point capability
Denorms: Yes
Quiet NaNs: Yes
Round to nearest even: Yes
Round to zero: Yes
Round to +ve and infinity: Yes
IEEE754-2008 fused multiply-add: Yes
Cache type: Read/Write
Cache line size: 64
Cache size: 32768
Global memory size: 12536324096
Constant buffer size: 65536
Max number of constant args: 8
Local memory type: Global
Local memory size: 32768
Max pipe arguments: 16
Max pipe active reservations: 16
Max pipe packet size: 3134081024
Max global variable size: 1879048192
Max global variable preferred total size: 1879048192
Max read/write image args: 64
Max on device events: 0
Queue on device max size: 0
Max on device queues: 0
Queue on device preferred size: 0
SVM capabilities:
Coarse grain buffer: No
Fine grain buffer: No
Fine grain system: No
Atomics: No
Preferred platform atomic alignment: 0
Preferred global atomic alignment: 0
Preferred local atomic alignment: 0
Kernel Preferred work group size multiple: 1
Error correction support: 0
Unified memory for Host and Device: 1
Profiling timer resolution: 1
Device endianess: Little
Available: Yes
Compiler available: Yes
Execution capabilities:
Execute OpenCL kernels: Yes
Execute native function: Yes
Queue on Host properties:
Out-of-Order: No
Profiling : Yes
Queue on Device properties:
Out-of-Order: No
Profiling : No
Platform ID: 0x7fa35922b430
Name: Intel(R) Core(TM) i5-4590 CPU @ 3.30GHz
Vendor: GenuineIntel
Device OpenCL C version: OpenCL C 1.2
Driver version: 1800.11 (sse2,avx)
Profile: FULL_PROFILE
Version: OpenCL 1.2 AMD-APP (1800.11)
Extensions: cl_khr_fp64 cl_amd_fp64 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_gl_sharing cl_ext_device_fission cl_amd_device_attribute_query cl_amd_vec3 cl_amd_printf cl_amd_media_ops cl_amd_media_ops2 cl_amd_popcnt cl_khr_spir cl_khr_gl_event
Ok. Looks reasonable. Hmmm.
Well, error -8 is:
-8 CL_MEM_COPY_OVERLAP clEnqueueCopyBuffer, clEnqueueCopyBufferRect, clEnqueueCopyImage
if src_buffer and dst_buffer are the same buffer or sub-buffer object and the source and destination regions overlap, or if src_buffer and dst_buffer are different sub-buffers of the same associated buffer object and they overlap. The regions overlap if src_offset ≤ dst_offset ≤ src_offset + size – 1, or if dst_offset ≤ src_offset ≤ dst_offset + size – 1.
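The overlap condition quoted from the spec can be written out directly. This is just an illustrative sketch of that predicate (the function name is mine, not part of cltorch or OpenCL):

```cpp
#include <cstddef>

// The CL_MEM_COPY_OVERLAP condition for two regions of the same buffer:
// the regions [srcOffset, srcOffset+size-1] and [dstOffset, dstOffset+size-1]
// overlap when either region's start falls inside the other region.
bool regionsOverlap(size_t srcOffset, size_t dstOffset, size_t size) {
    return (srcOffset <= dstOffset && dstOffset <= srcOffset + size - 1) ||
           (dstOffset <= srcOffset && srcOffset <= dstOffset + size - 1);
}
```

Note that copying a tensor onto itself is the degenerate case srcOffset == dstOffset, which always satisfies this condition for any nonzero size.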
I wonder if it's because it's copying from one tensor to the same tensor, and copy doesn't like that much. Maybe I will change it to not use copy when that is the case. Unfortunately I have no way of testing whether this will fix the problem, so I will make the change, and you can reinstall and retry, ok?
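The idea described above amounts to guarding the copy with a same-buffer check. A minimal sketch, assuming the self-copy hypothesis is right (memcpy stands in for the real clEnqueueCopyBuffer call, and copyIfDistinct is an illustrative name, not cltorch's API):

```cpp
#include <cstring>
#include <cstddef>

// Skip the copy entirely when source and destination are the same storage,
// instead of issuing a copy with identical src/dst buffers, which the
// OpenCL runtime rejects with CL_MEM_COPY_OVERLAP (error -8).
void copyIfDistinct(float* dst, const float* src, size_t n) {
    if (dst == src) return;  // same tensor storage: nothing to do
    std::memcpy(dst, src, n * sizeof(float));
}
```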
Hmmm, seems that 'fix' in a commit is a magic word that automatically closes the issue :-P
I've added a potential fix into the code. Can you redo luarocks make cltorch, and see if this fixes the issue?
It actually does! Yay, thank you for your help.
Cooool! :-)
Hello, I encountered this error message in the CNNMRF project and I referenced the issue over here.
Could you tell me what the error code means? I have seen you around similar projects and I thought you might be willing to help.