JuliaGaussianProcesses / KernelFunctions.jl

Julia package for kernel functions for machine learning
https://juliagaussianprocesses.github.io/KernelFunctions.jl/stable/
MIT License

Add LazyKernelMatrix and lazykernelmatrix #515

Open sethaxen opened 1 year ago

sethaxen commented 1 year ago

Summary

This PR introduces functionality for lazily representing kernel matrices, which is necessary when the matrix might be too large to store in memory. Fixes #514
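
As a rough sketch of the idea (the type name, field names, and constructor below are illustrative assumptions, not necessarily the API this PR ends up with), a lazy kernel matrix is just an `AbstractMatrix` that evaluates kernel entries on demand:

```julia
using KernelFunctions

# Illustrative sketch only: evaluates k(x[i], y[j]) on demand instead of
# materialising the full length(x) × length(y) matrix.
struct LazyKernelMatrixSketch{T<:Real,Tk<:Kernel,Tx<:AbstractVector,Ty<:AbstractVector} <: AbstractMatrix{T}
    kernel::Tk
    x::Tx
    y::Ty
end

function LazyKernelMatrixSketch(k::Kernel, x::AbstractVector, y::AbstractVector=x)
    T = typeof(k(first(x), first(y)))  # naive eltype guess; see the output-type discussion below
    return LazyKernelMatrixSketch{T,typeof(k),typeof(x),typeof(y)}(k, x, y)
end

Base.size(K::LazyKernelMatrixSketch) = (length(K.x), length(K.y))
Base.getindex(K::LazyKernelMatrixSketch, i::Int, j::Int) = K.kernel(K.x[i], K.y[j])
```

Only the kernel and the two input vectors are stored, so e.g. matrix-vector products can be computed without ever allocating the dense matrix.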

Proposed changes

What alternatives have you considered?

To-Do

codecov[bot] commented 1 year ago

Codecov Report

Patch coverage has no change and project coverage change: -6.52% :warning:

Comparison is base (ef6d459) 94.16% compared to head (4d7d0b2) 87.64%.

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##           master     #515      +/-   ##
==========================================
- Coverage   94.16%   87.64%   -6.52%
==========================================
  Files          52       53       +1
  Lines        1387     1433      +46
==========================================
- Hits         1306     1256      -50
- Misses         81      177      +96
```

| [Impacted Files](https://app.codecov.io/gh/JuliaGaussianProcesses/KernelFunctions.jl/pull/515?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=JuliaGaussianProcesses) | Coverage Δ | |
|---|---|---|
| [src/KernelFunctions.jl](https://app.codecov.io/gh/JuliaGaussianProcesses/KernelFunctions.jl/pull/515?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=JuliaGaussianProcesses#diff-c3JjL0tlcm5lbEZ1bmN0aW9ucy5qbA==) | `100.00% <ø> (ø)` | |
| [src/matrix/kernelmatrix.jl](https://app.codecov.io/gh/JuliaGaussianProcesses/KernelFunctions.jl/pull/515?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=JuliaGaussianProcesses#diff-c3JjL21hdHJpeC9rZXJuZWxtYXRyaXguamw=) | `100.00% <ø> (ø)` | |
| [src/matrix/lazykernelmatrix.jl](https://app.codecov.io/gh/JuliaGaussianProcesses/KernelFunctions.jl/pull/515?src=pr&el=tree&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=JuliaGaussianProcesses#diff-c3JjL21hdHJpeC9sYXp5a2VybmVsbWF0cml4Lmps) | `0.00% <0.00%> (ø)` | |

... and [9 files with indirect coverage changes](https://app.codecov.io/gh/JuliaGaussianProcesses/KernelFunctions.jl/pull/515/indirect-changes?src=pr&el=tree-more&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=JuliaGaussianProcesses)


devmotion commented 1 year ago

this could allow even more structured matrix types to be defined for specific kernel types and returned in the future

Yes, I imagine this would also be useful in the context of https://github.com/JuliaGaussianProcesses/KernelFunctions.jl/issues/93#issuecomment-820262209. Though there the non-lazy version might often be sufficient (or even preferable).

theogf commented 1 year ago

It looks really nice already! The only thing that makes me a bit uneasy is the output type... I am not sure there is a strong guarantee that the type of the first evaluated entry corresponds to the rest of the matrix. But I also don't see how that could be solved easily...

FelixBenning commented 1 year ago

Relevant Part

For this a lazy ProductArray would also be a neat abstraction:

julia> v = rand(3)
3-element Vector{Float64}:
 0.417571623820013
 0.39972694171008405
 0.9970727095536318

julia> productArray(v,v)
3×3 ProductArrays.ProductArray{Tuple{Vector{Float64}, Vector{Float64}}, Tuple{Float64, Float64}, 2}:
 (0.417572, 0.417572)  (0.417572, 0.399727)  (0.417572, 0.997073)
 (0.399727, 0.417572)  (0.399727, 0.399727)  (0.399727, 0.997073)
 (0.997073, 0.417572)  (0.997073, 0.399727)  (0.997073, 0.997073)

Now the lazy kernel matrix is simply a lazy mappedarray over this product array. Since mappedarray stores its mapping in a field:

struct ReadonlyMappedArray{T,N,A<:AbstractArray,F} <: AbstractMappedArray{T,N}
    f::F
    data::A
end

you can write custom methods for the case `F<:Kernel`. This would allow you to add `+` as you do here.

Synergy tangent

This is neat, because the productArray abstraction is also helpful as a multioutput abstraction:

julia> vec(productArray(v,1:2))
6-element reshape(::ProductArrays.ProductArray{Tuple{Vector{Float64}, UnitRange{Int64}}, Tuple{Float64, Int64}, 2}, 6) with eltype Tuple{Float64, Int64}:
 (0.417571623820013, 1)
 (0.39972694171008405, 1)
 (0.9970727095536318, 1)
 (0.417571623820013, 2)
 (0.39972694171008405, 2)
 (0.9970727095536318, 2)

cf. https://github.com/lazyLibraries/ProductArrays.jl (not yet a package https://github.com/JuliaRegistries/General/pull/84683)
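
For reference, KernelFunctions' existing multi-output input wrapper gives the same flattened (input, output index) pairs, so the two abstractions line up (a small usage sketch; I believe the ordering of `MOInput` matches the `vec(productArray(v, 1:2))` output above):

```julia
using KernelFunctions

x = rand(3)
k = IndependentMOKernel(GaussianKernel())

# MOInput(x, 2) flattens to (x₁,1), (x₂,1), (x₃,1), (x₁,2), (x₂,2), (x₃,2),
# i.e. all inputs for output 1 followed by all inputs for output 2.
xmo = MOInput(x, 2)
K = kernelmatrix(k, xmo)  # 6×6 multi-output kernel matrix
```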

devmotion commented 1 year ago

I have the same feeling as @sethaxen:

I considered using one of the many existing array types in the ecosystem for lazily representing an array, but defining a novel type allows us to do things like perform scalar multiplication or add kernel matrices without densifying the array.

A separate dedicated type provides more information about the context and allows us to define dispatches, special operations and optimizations that might be less relevant or not well defined in the general case.
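
For example (continuing the illustrative sketch from the PR summary above, with assumed fields `kernel`, `x`, `y`), scalar multiplication and sums of kernel matrices can stay lazy by folding the operation into the kernel itself:

```julia
# Hypothetical dispatches on the sketch type, not the PR's actual API.
# Scaling folds into a ScaledKernel rather than densifying:
Base.:*(α::Real, K::LazyKernelMatrixSketch) = LazyKernelMatrixSketch(α * K.kernel, K.x, K.y)

# Adding two lazy matrices over the same inputs folds into a KernelSum:
Base.:+(K1::LazyKernelMatrixSketch, K2::LazyKernelMatrixSketch) =
    LazyKernelMatrixSketch(K1.kernel + K2.kernel, K1.x, K1.y)  # assumes K1 and K2 share inputs
```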

FelixBenning commented 1 year ago

A separate dedicated type provides more information about the context and allows us to define dispatches, special operations and optimizations that might be less relevant or not well defined in the general case.

I edited my reply to explain how you can still do special dispatches

FelixBenning commented 1 year ago

I actually found this pull request because I wanted to ask: does it ever make sense for kernelmatrix to be eager? I mean, memory access is expensive, and you use $n^2$ space instead of the natural $n$ for the input vector. And kernel matrices are often only accessed once (e.g. when you take their Cholesky decomposition) and tossed afterwards. Are some kernels really so expensive to evaluate that this is justified?

FelixBenning commented 1 year ago

The Transform tests seem very much broken to me: they break here, in #511, and in #510 in pretty much the same way, and those PRs have nothing to do with each other.

sethaxen commented 1 year ago

Only thing which is a bit uneasy is the output type... I am not sure there is a strong guarantee that the first evaluated type would correspond to the rest of the matrix. But I also don't see how it could be solved easily...

I wonder if it's safe to ask Julia to infer the return type of kernelmatrix and use that to infer the eltype.
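
Something along those lines could be done with `Base.promote_op` on the kernel itself (a sketch, with the usual caveat that inference results are a best effort rather than a semantic guarantee):

```julia
using KernelFunctions

# Sketch: ask inference for the scalar return type of the kernel given the
# input eltypes, and use that as the eltype of the lazy matrix.
lazy_eltype(k::Kernel, x::AbstractVector, y::AbstractVector) =
    Base.promote_op(k, eltype(x), eltype(y))

k = SqExponentialKernel()
lazy_eltype(k, rand(3), rand(3))            # Float64
lazy_eltype(k, rand(Float32, 3), rand(3))   # Float64
```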

FelixBenning commented 1 year ago

Only thing which is a bit uneasy is the output type... I am not sure there is a strong guarantee that the first evaluated type would correspond to the rest of the matrix. But I also don't see how it could be solved easily...

I wonder if it's safe to ask Julia to infer the return type of kernelmatrix and use that to infer the eltype.

MappedArrays does eltype inference 🤷

willtebbutt commented 1 year ago

A separate option would be to be a little less ambitious with this initial implementation: rather than implementing a new matrix type, just provide an interface for the operation we want.

Specifically, add the following function to the interface:

function kernel_matrix_vector_product(k::Kernel, x::AbstractVector, y::AbstractVector, v::AbstractVector{<:Real})
    return kernelmatrix(k, x, y) * v
end
kernel_matrix_vector_product(k::Kernel, x::AbstractVector, v::AbstractVector{<:Real}) = kernelmatrix(k, x) * v

where the above methods are the default implementations and specify the semantics. For large problems we could wrap the kernel in some other type which says "don't ever instantiate me", and implements a low-memory version of this operation.

The nice thing about doing things this way is that we avoid having to e.g. guess at the output type of kernelmatrix, or whatever. It's a little verbose, but maybe it covers our initial requirements?

Does this cover our needs? I guess really I'm just asking whether we actually need to go to the trouble of implementing a new matrix type, and can instead just implement a single function that can be overloaded. If there is a range of functionality that we require, then maybe a matrix type is needed, but if we really only have one operation in mind, maybe it's not?

edit: or we could add an additional argument to the function that says how things should be computed when you're doing block-wise operations, e.g. kernel_matrix_vector_product(::ChunkSize, ::Kernel, ::AbstractVector, ::AbstractVector{<:Real})
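
A block-wise default for the low-memory path could look roughly like this (a sketch under the assumption of the extra `ChunkSize` argument; all names are illustrative):

```julia
using KernelFunctions

# Illustrative chunk-size wrapper; not an existing KernelFunctions type.
struct ChunkSize
    n::Int
end

# Sketch: compute kernelmatrix(k, x, y) * v one block of rows at a time, so at
# most chunk.n * length(y) kernel-matrix entries are materialised at once.
function kernel_matrix_vector_product(
    chunk::ChunkSize, k::Kernel, x::AbstractVector, y::AbstractVector, v::AbstractVector{<:Real}
)
    T = typeof(zero(eltype(v)) * k(first(x), first(y)))
    out = Vector{T}(undef, length(x))
    for i in 1:chunk.n:length(x)
        idx = i:min(i + chunk.n - 1, length(x))
        out[idx] = kernelmatrix(k, x[idx], y) * v
    end
    return out
end
```

An iterative solver (e.g. via a LinearMaps.jl-style wrapper) could then call this repeatedly without the full matrix ever existing in memory.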

FelixBenning commented 1 year ago

ProductArrays v1.0.0 is now online, so you could do

using MappedArrays: mappedarray, ReadonlyMappedArray
using ProductArrays: productArray, ProductArray

struct Splat{T}
    func::T
end

(s::Splat)(x) = s.func(x...)

lazykernelmatrix(k::Kernel, x, y) = mappedarray(Splat(k), productArray(x,y))

const LazyKernelMatrix{K<:Kernel, T<:Real, P<:ProductArray} = ReadonlyMappedArray{T, 2, P, Splat{K}}

and then implement additional functionality for LazyKernelMatrix (you would get a ton of functionality from MappedArrays for free, like eltype inference):

julia> a = mappedarray(Splat(k), productArray(x,x))
3×3 mappedarray(Splat{SqExponentialKernel{Distances.Euclidean}}(Squared Exponential Kernel (metric = Distances.Euclidean(0.0))), ::ProductArray{Tuple{Vector{Float64}, Vector{Float64}}, Tuple{Float64, Float64}, 2}) with eltype Float64:
 1.0       0.879512  0.998656
 0.879512  1.0       0.901716
 0.998656  0.901716  1.0

julia> a isa LazyKernelMatrix
true

julia> eltype(a)
Float64

julia> a isa AbstractArray
true

julia> dump(a) # very readable structure
ReadonlyMappedArray{Float64, 2, ProductArray{Tuple{Vector{Float64}, Vector{Float64}}, Tuple{Float64, Float64}, 2}, Splat{SqExponentialKernel{Distances.Euclidean}}}
  f: Splat{SqExponentialKernel{Distances.Euclidean}}
    func: SqExponentialKernel{Distances.Euclidean}
      metric: Distances.Euclidean
        thresh: Float64 0.0
  data: ProductArray{Tuple{Vector{Float64}, Vector{Float64}}, Tuple{Float64, Float64}, 2}
    prodIt: Base.Iterators.ProductIterator{Tuple{Vector{Float64}, Vector{Float64}}}
      iterators: Tuple{Vector{Float64}, Vector{Float64}}
        1: Array{Float64}((3,)) [0.6995475228324576, 0.19281640447854786, 0.6476909300916908]
        2: Array{Float64}((3,)) [0.6995475228324576, 0.19281640447854786, 0.6476909300916908]

If you want, I can also write a convenience method to skip prodIt and get the underlying iterators straight from the ProductArray wrapper around ProductIterator.
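
Concretely, something like this could work on top of the definitions above (the `+` overload assumes both matrices were built from the same product array; `inputs` just reads the fields visible in the dump):

```julia
# Convenience accessor for the underlying input vectors, based on the field
# names shown in the dump above (data.prodIt wraps a Base ProductIterator).
inputs(K::LazyKernelMatrix) = K.data.prodIt.iterators

# Laziness-preserving addition: combine the kernels instead of densifying.
function Base.:+(A::LazyKernelMatrix, B::LazyKernelMatrix)
    A.data === B.data || throw(ArgumentError("kernel matrices must share the same inputs"))
    return mappedarray(Splat(A.f.func + B.f.func), A.data)
end
```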