FluxML / Flux.jl

Relax! Flux is the ML library that doesn't make you tensor
https://fluxml.ai/

Does Flux support N-dimensional convolutions, where N>3? #451

Open bhvieira opened 6 years ago

bhvieira commented 6 years ago

While this works:

Conv((1,1,1), 1=>1)(rand(1,1,1,1,1)) #3D convolution

This throws an error:

Conv((1,1,1,1), 1=>1)(rand(1,1,1,1,1,1)) #4D convolution
#ERROR: MethodError: no method matching conv!(::Array{Float64,6}, ::Array{Float64,6}, ::Array{Float64,6}; pad=(0, 0, 0, 0), stride=(1, 1, 1, 1), dilation=(1, 1, 1, 1))

I'm running Julia 0.7 and:

[587475ba] Flux v0.6.7+ #master (https://github.com/FluxML/Flux.jl.git)
[872c559c] NNlib v0.4.2+ #master (https://github.com/FluxML/NNlib.jl.git)
MikeInnes commented 6 years ago

Not right now; each of 1-, 2- and 3-D convolutions just calls out to a specific kernel. But I'd happily take an implementation of an N-D convolution in NNlib, even if it's not heavily optimised.
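For reference, the loop structure such an N-dimensional CPU kernel needs can be sketched directly. This is a hypothetical illustration (Python/NumPy, not NNlib code), covering only stride-1 "valid" cross-correlation, which is what deep-learning "convolution" usually means:

```python
from itertools import product

import numpy as np

def convNd(x, w):
    """Direct N-D cross-correlation ("valid" padding, stride 1).
    x and w are arrays with the same number of dimensions."""
    assert x.ndim == w.ndim
    # One output element per position where the kernel fits inside the input.
    out_shape = tuple(xs - ws + 1 for xs, ws in zip(x.shape, w.shape))
    y = np.zeros(out_shape)
    for idx in product(*(range(s) for s in out_shape)):
        # Slice out the window under the kernel and take the dot product.
        window = x[tuple(slice(i, i + ws) for i, ws in zip(idx, w.shape))]
        y[idx] = np.sum(window * w)
    return y

# 4-D example: a 3x3x3x3 input with a 2x2x2x2 kernel.
x = np.arange(3 * 3 * 3 * 3, dtype=float).reshape(3, 3, 3, 3)
w = np.ones((2, 2, 2, 2))
y = convNd(x, w)
print(y.shape)  # (2, 2, 2, 2)
```

The dimensionality never appears explicitly: the same loop works for any N, which is the sense in which an unoptimised N-D kernel is straightforward. Channel/batch handling, padding, stride, and dilation are omitted here.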

bhvieira commented 6 years ago

@MikeInnes Right, thanks for the answer; I might look into it. I have time series of 3D volumes (fMRI), and I didn't want to go overboard with a 3D convolutional RNN, so I thought a simple 4D CNN (with kernel size 1 in the time dimension) could be a better idea. Do you want to keep the issue open as a feature request, or should I close it?
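Since the proposed 4D kernel has size 1 along the time axis, each time step is convolved independently, so one workaround that needs no N-D convolution at all is to fold time into the batch dimension and reuse the existing 3D conv. A shape-level sketch with hypothetical sizes (NumPy used only for the reshape bookkeeping; not Flux's API):

```python
import numpy as np

# Hypothetical fMRI-style data: spatial dims, channels, time, batch.
X, Y, Z, C, T, N = 8, 8, 8, 1, 10, 2
x = np.random.rand(X, Y, Z, C, T, N)

# Merge the two trailing axes (time, batch) into one batch axis. Flux's
# layout is (spatial..., channels, batch), so a 3D Conv((k, k, k), C => Cout)
# layer would accept the folded array directly.
x_folded = x.reshape(X, Y, Z, C, T * N)

# After the 3D convolution, reshape the output back, e.g.
# (X', Y', Z', Cout, T*N) -> (X', Y', Z', Cout, T, N).
x_roundtrip = x_folded.reshape(X, Y, Z, C, T, N)

print(x_folded.shape)  # (8, 8, 8, 1, 20)
```

This only works because the temporal kernel size is 1; any mixing across time steps would need a genuine 4D kernel.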

MikeInnes commented 5 years ago

Yeah, we may as well keep this open, thanks.

shreyas-kowshik commented 5 years ago

I would like to work on this issue. Can someone please guide me as to what needs to be done?

MikeInnes commented 5 years ago

Might be best to speak to @avik-pal. On the GPU we're limited by what NVIDIA provides in CUDNN, but it should be straightforward to write an N-dimensional CPU kernel.

avik-pal commented 5 years ago

Unfortunately, CUDNN only provides 2D and 3D convolutions.

shreyas-kowshik commented 5 years ago

@avik-pal So should I go ahead with the CPU part? And if yes, could you point me to where in the code I should start looking?

avik-pal commented 5 years ago

@shreyas-kowshik You will first have to add it in NNlib. Here are the 2D and 3D implementations.

datnamer commented 5 years ago

Is it not possible to write an N-D conv on the GPU?

avik-pal commented 5 years ago

@datnamer It is definitely possible. We need to write the CUDA kernel with CUDAnative and get it integrated with the Flux API. However, making the kernel efficient is not as straightforward as optimizing the CPU kernel.

datnamer commented 5 years ago

Understood

shreyas-kowshik commented 5 years ago

I was going through some code in NNlib.jl when I came across this: https://github.com/FluxML/NNlib.jl/pull/31

That PR effectively adds N-dimensional convolutions to NNlib.jl.

cossio commented 5 years ago

Should this be closed since https://github.com/FluxML/NNlib.jl/pull/94 has been merged?

datnamer commented 5 years ago

does that implementation work on GPUs?

bhvieira commented 5 years ago

@cossio It doesn't appear to work, though. What should the syntax be to get (4+)D convolutions working?

darsnack commented 3 years ago

I think this should no longer be an issue.

AriMKatz commented 3 years ago

Even on CUDA?

darsnack commented 3 years ago

Not sure. Will test tomorrow morning (or someone else can race me).

CarloLucibello commented 3 years ago

On CPU, we support only up to 3D convolutions (https://github.com/FluxML/NNlib.jl/blob/master/src/conv.jl). I suspect it is the same for CUDNN, but I didn't check.

darsnack commented 3 years ago

Okay then we definitely should keep this open.