ngphuoc opened 6 years ago
It seems to me that GPU convolution and recurrence are kernel-specific. ArrayFire.jl has its own convolution function, and other basic functions can get their gradients from AutoGrad.
Will your support for CPU convolution and rnn apply to any array type, like CuArray or AFArray?
Yes, all functions that call cudnn or hand-written kernels in libknet8 are currently KnetArray-specific. There is no reason for this; they should work with any raw CUDA pointer and dimension information, so we could write methods to support other array types. The CPU code will not work because it has a for loop that goes through elements one at a time, which would not be efficient with GPU arrays. I am waiting for julia 0.7 to come out, when CUDAnative, GPUArrays, CLArrays, and CuArrays will be supported out of the box (i.e. will not require a julia recompilation), before seriously looking into supporting multiple array types. However, if you want to try it out, I can help you write conv4/pool methods for other array types.
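For illustration, here is a minimal sketch of the kind of element-at-a-time CPU loop meant above — not Knet's actual implementation. `naive_conv2` and its "valid"-mode 2-D cross-correlation are my own simplification; each output element is computed in a scalar inner loop, which is exactly the access pattern that is slow on GPU arrays:

```julia
# Sketch only: a naive "valid" 2-D cross-correlation on the CPU.
# Every output element is accumulated one scalar at a time, which is
# fine for Arrays but would be very slow for GPU array types.
function naive_conv2(w::AbstractMatrix, x::AbstractMatrix)
    W1, W2 = size(w)
    X1, X2 = size(x)
    y = zeros(eltype(x), X1 - W1 + 1, X2 - W2 + 1)
    for j in 1:size(y, 2), i in 1:size(y, 1)
        s = zero(eltype(x))
        for q in 1:W2, p in 1:W1
            s += w[p, q] * x[i + p - 1, j + q - 1]
        end
        y[i, j] = s
    end
    return y
end

w = ones(2, 2)
x = collect(reshape(1.0:9.0, 3, 3))
y = naive_conv2(w, x)   # 2×2 result: [12.0 24.0; 16.0 28.0]
```

A GPU-friendly version would instead launch one kernel over the whole output (as libknet8/cudnn do), rather than issuing one scalar read per element.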
Thanks a lot for your help. I am trying ArrayFire, so it would be great if you could help with conv4/pool methods for ArrayFire. I'd like to try it for rnn too, but if you don't have time, I may be able to follow your changes in conv4/pool and apply them to rnn.jl.
I will take a look. Also related are #129 and #150.
I did some tests following test/karray.jl. AFArray inherits from AbstractArray, but it seems some array operations are not supported yet: https://github.com/JuliaComputing/ArrayFire.jl/issues/188
May I recommend duplicating conv.jl (or any other unimplemented op) and replacing the KnetArray type with the AFArray type. The @cuda calls should work if pointer(::AFArray) is defined correctly.
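To illustrate why defining `pointer` is the key step: code written against raw pointers dispatches through `Base.pointer`, so extending it for a new wrapper type lets such code work unchanged. A hedged sketch below, using a made-up `MyArray` host-memory stand-in rather than ArrayFire's real device-pointer API:

```julia
# Sketch: MyArray is a hypothetical stand-in for a wrapper type like
# AFArray; its data lives in an ordinary Array instead of device memory.
struct MyArray{T,N}
    data::Array{T,N}
end

# Extending Base.pointer (not shadowing it) is what makes generic
# pointer-based code pick up the new type.
Base.pointer(a::MyArray{T}) where {T} = pointer(a.data)
Base.pointer(a::MyArray{T}, i) where {T} = pointer(a.data) + (i - 1) * sizeof(T)

# Any function written against pointers now works for MyArray too.
# GC.@preserve keeps `a` (and thus a.data) rooted while we use the raw pointer.
first_elt(a) = GC.@preserve a unsafe_load(pointer(a))

a = MyArray(Float32[3.0, 1.0, 2.0])
first_elt(a)   # 3.0f0
```

For a real AFArray the same pattern would convert a device pointer (e.g. via `get_device_ptr`) instead of a host pointer, and the pointer would be consumed by CUDA kernels rather than `unsafe_load`.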
I replaced all KnetArray with AFArray and added the following to the top of src/conv.jl, then rebuilt Knet. The build went OK.
using ArrayFire
import Base: pointer  # extend Base.pointer rather than shadowing it
# https://github.com/JuliaComputing/ArrayFire.jl/issues/189
# get_device_ptr, device_array, lock_device_ptr, unlock_device_ptr,
pointer(a::AFArray{T}) where {T} = convert(Ptr{T}, get_device_ptr(a))
pointer(a::AFArray{T}, i) where {T} = convert(Ptr{T}, get_device_ptr(a) + (i-1)*sizeof(T))
Base.similar(x::AFArray, s::Tuple) = AFArray(similar(Array(x), s))
AFArray{T}(len::Integer) where {T} = AFArray(Array{T}(len))
...
When I ran this example:
using Knet,ArrayFire
atype = Array{Float32}
gtype = AFArray
#= gtype = KnetArray =#
w = xavier(5,5,1,20) |> atype |> gtype
x = rand(28,28,1,128) |> atype |> gtype
conv4(w,x;padding=0)
I got the error
ERROR: conversion to pointer not defined for ArrayFire.AFArray{Float32,4}
If I add this line
Base.unsafe_convert(::Type{Ptr{T}}, a::AFArray) where {T} = pointer(a)
it causes a segfault during the build, at the line
@primitive conv4(w,x; o...),dy conv4w(w,x,dy;o...) conv4x(w,x,dy;o...)
I am trying the examples in Knet's examples directory with ArrayFire.jl. So far I have housing.jl and mnist.jl working. For lenet.jl, I got the error below. Is convolution tied only to KnetArray?
Below is the simplified version of lenet.jl I used: