martinjankowiak opened this issue 3 years ago
Hi!

If you are aiming for a kernel of the type `k(x, y) = sigma * Matern52(x / lengthscale)`, `ScaleTransform` will not work for the `sigma` part; you can either use `ScaledKernel(kernel, sigma)` or, more simply, `sigma * kernel`.
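For concreteness, a minimal sketch of the two equivalent constructions with KernelFunctions.jl (the numeric values are illustrative):

```julia
using KernelFunctions

# ScaleTransform only rescales the inputs (the length-scale part);
# the outer variance needs ScaledKernel or `*`.
base = Matern52Kernel() ∘ ScaleTransform(0.5)
σ² = 2.0

k1 = ScaledKernel(base, σ²)   # explicit wrapper
k2 = σ² * base                # equivalent shorthand

x, y = rand(3), rand(3)
k1(x, y) ≈ k2(x, y)           # true
```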
As for the parameters not being updated: that should obviously not happen :). I will check what is going on. Note, however, that a more stable version is currently being worked on in #71, which is waiting on https://github.com/JuliaGaussianProcesses/KernelFunctions.jl/issues/203 to be resolved.
Hello,

I'm confused about how to make sure that all the hyperparameters of my kernel are being learned. In particular, I would like to use a Matern 5/2 kernel specified by D + 1 parameters (D length scales and one kernel scale). To my understanding, that can be specified as follows.
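A minimal sketch of one such D + 1-parameter construction, assuming KernelFunctions.jl (the names `ℓ` and `σ²` and their values are illustrative, not from the original):

```julia
using KernelFunctions

D = 5
ℓ = ones(D)        # D length scales
σ² = 1.0           # one kernel scale

# D + 1 parameters in total: ARDTransform carries the D inverse
# length scales, the outer multiplication carries the kernel scale.
kernel = σ² * (Matern52Kernel() ∘ ARDTransform(1 ./ ℓ))
```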
However, if I use `Flux.params(kernel)` to inspect the kernel hyperparameters after training, it seems the hyperparameters haven't been updated. Note that I am using `SVGP` with the default optimiser, so I would expect the hyperparameters to be updated. Is this the wrong way to inspect the hyperparameters? Do I need to do anything else to specify that I want the hyperparameters to be updated?

Thank you!
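For reference, a rough sketch of the inspection described above (assuming the kernel exposes its parameters to Flux; note that `Flux.params` only collects arrays, so purely scalar parameters would not show up):

```julia
using Flux

# Snapshot the trainable arrays before training...
ps = Flux.params(kernel)
before = [copy(p) for p in ps]

# ... run the SVGP training loop here ...

# ...and compare afterwards to see whether anything moved.
for (p, b) in zip(Flux.params(kernel), before)
    @show p .- b
end
```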