stev47 / StaticKernels.jl

Julia-native non-allocating kernel operations on arrays

Add an Option to Broadcast Along the 3rd Dimension #6

Open RoyiAvital opened 1 year ago

RoyiAvital commented 1 year ago

In many cases one wants to apply the kernel to a multi-channel array (an RGB image). Is there a way to do so with a single kernel working on each channel?

stev47 commented 1 year ago

Are you looking for mapslices?

using StaticKernels

k = @kernel w -> w[1, 0] - w[0, 1]
img = rand(3, 10, 10)
mapslices(x -> map(k, x), img; dims = (2, 3))

It might not be optimal performance-wise, however.

RoyiAvital commented 1 year ago

Well, currently what I do is loop over the 3rd dimension of a pre-allocated array. I'm after a non-allocating solution besides the explicit loop.

stev47 commented 1 year ago

You could just write a three-dimensional kernel (expanding from my previous example):

k = @kernel w -> w[0, 1, 0] - w[0, 0, 1]

There is currently no way to "lift" a kernel to a higher dimension. I'm open to contributions, however.

RoyiAvital commented 1 year ago

I mean I'm doing something like:

for ii in 1:3
  @views map!(mK, mO[:, :, ii], extend(mI[:, :, ii], StaticKernels.ExtensionReplicate()));
end

It still allocates a few KBs, but it works well. By the way, is there any way to completely avoid allocations?

stev47 commented 1 year ago

You can rewrite your kernel mK according to my previous example. Also, I cannot reproduce your example: the following does not allocate for me on Julia 1.8.3:

using BenchmarkTools
using StaticKernels

# apply the 2D kernel to each channel of the preallocated output
f!(mO, mK, mI) = begin
    for ii in 1:3
        @views map!(mK, mO[:, :, ii], extend(mI[:, :, ii], StaticKernels.ExtensionReplicate()))
    end
end

mK = @kernel w -> w[1, 1] - w[0, 0]
mI = rand(10, 10, 3)
mO = similar(mI)

@btime f!($mO, $mK, $mI)

RoyiAvital commented 1 year ago

Indeed, it seems the difference is that I didn't wrap it in a function but only in a local begin block.

I don't understand what you mean by your previous example. If you mean k = @kernel w -> w[0, 1, 0] - w[0, 0, 1], then, if I get it right, it is not what I'm after. I am after mK = @kernel w -> w[1, 1] - w[0, 0] applied to each matrix / channel on its own.

From my point of view, the kernel defined by mK = @kernel w -> w[1, 1] - w[0, 0] is a 2D operator, so by broadcasting rules, if I apply it to a 3D tensor I would expect broadcasting to apply the 2D operator per channel of the image, which here is represented by 3 channels (3 matrices stacked along the 3rd dimension).

stev47 commented 1 year ago

I understand what you want to do, but you did not apply my example to your setting: if you have the channel as the last dimension, it reads k = @kernel w -> w[1, 1, 0] - w[0, 0, 0], i.e. a kernel which has size 1 along the third dimension and therefore acts like a normal broadcast/map on that dimension.