tensorflow / probability

Probabilistic reasoning and statistical analysis in TensorFlow
https://www.tensorflow.org/probability/
Apache License 2.0

FeatureRequest: Adding Constant and WhiteNoise kernel in tfp.math.psd_kernels #852

Open tirthasheshpatel opened 4 years ago

tirthasheshpatel commented 4 years ago

Constant and WhiteNoise kernels haven't been implemented yet in the tfp.math.psd_kernels submodule. I am not sure whether there is a way to construct these kernels from the currently implemented psd_kernels. Either way, it would be really nice to see them implemented in tfp.

Kuurusch commented 4 years ago

Yes, it would be nice to have at least a WhiteNoise kernel, but even better would be a kernel where one can choose the noise distribution, e.g. a uniform or beta distribution!

leroidauphin commented 4 years ago

For the constant kernel, could you use the Linear kernel with the slope_variance set to zero? That would give you this kernel: k(x, y) = bias_variance**2
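A quick sketch of the math shows why this works. This is plain NumPy mirroring the formula used by tfp.math.psd_kernels.Linear (parameter names borrowed from that class; this is not the tfp implementation): with slope_variance set to zero, every entry of the kernel matrix collapses to bias_variance**2.

```python
import numpy as np

def linear_kernel(x1, x2, bias_variance=1.0, slope_variance=1.0, shift=0.0):
    """Linear kernel: k(x, y) = bias_variance**2
    + slope_variance**2 * (x - shift) . (y - shift).

    NumPy sketch of the formula behind tfp.math.psd_kernels.Linear;
    parameter names mirror that class, but this is not the tfp code.
    """
    dot = (x1 - shift) @ (x2 - shift).T
    return bias_variance**2 + slope_variance**2 * dot

x1 = np.random.rand(5, 3)
x2 = np.random.rand(4, 3)

# With slope_variance = 0 the dot-product term vanishes, so every
# entry is just bias_variance**2 (here 2.0**2 = 4.0), i.e. a constant kernel.
k = linear_kernel(x1, x2, bias_variance=2.0, slope_variance=0.0)
```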

tirthasheshpatel commented 4 years ago

Oh yes! I didn't think of that. Don't you think there would be a lot of redundant computation for larger datasets, though? Maybe adding a special case to the Linear kernel would be better. What do you think?

leroidauphin commented 4 years ago

There will be additional calculations, yes. At a minimum, the dot product between the x1 and x2 batches would still be computed, only for the result to be zeroed out afterwards. It might be easier to modify the Linear kernel to optimise this case.
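One possible shape for that optimisation, sketched in plain NumPy (hypothetical helper, not the tfp implementation): branch on slope_variance == 0 and broadcast the constant, skipping the pairwise dot products entirely.

```python
import numpy as np

def linear_kernel_matrix(x1, x2, bias_variance=1.0, slope_variance=1.0, shift=0.0):
    """Hypothetical sketch of the proposed special case for the Linear kernel.

    When slope_variance is zero the kernel is constant, so the
    O(n * m * d) pairwise dot product can be skipped and the constant
    bias_variance**2 broadcast directly.
    """
    if slope_variance == 0.0:
        # Constant-kernel shortcut: no dot products computed at all.
        return np.full((x1.shape[0], x2.shape[0]), bias_variance**2)
    dot = (x1 - shift) @ (x2 - shift).T
    return bias_variance**2 + slope_variance**2 * dot

x1 = np.random.rand(6, 2)
x2 = np.random.rand(3, 2)
k_const = linear_kernel_matrix(x1, x2, bias_variance=1.5, slope_variance=0.0)
```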

leroidauphin commented 4 years ago

Regarding the white noise kernel, I wonder whether it is possible at all to implement it with the existing interface. The definition of the white noise kernel is k(x_i, x_j) = \sigma^2 \delta_{ij}. Implementations of this kernel (such as the scikit-learn implementation) achieve the delta function by allowing x_j to be None, which is interpreted as k(x_i, x_i). The current implementation here appears to always require both x_i and x_j to be specified, and I cannot find a way in the TensorFlow API to discover whether two tensors are actually the same object, as opposed to element-wise equal. This would suggest that it is not possible to add a white noise kernel without changing the interface of the PositiveDefiniteKernel, although I would be happy to learn I have misunderstood something!
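To make the interface issue concrete, here is a NumPy sketch of the convention scikit-learn's WhiteKernel uses (hypothetical function name; not the tfp API): passing x2=None means "evaluate on the training points themselves", which is the only case where the Kronecker delta can fire, while any explicitly passed x2 is treated as distinct points and yields zeros.

```python
import numpy as np

def white_noise_kernel(x1, x2=None, noise_variance=1.0):
    """White noise kernel: k(x_i, x_j) = sigma**2 * delta_ij.

    Follows the scikit-learn WhiteKernel convention: x2=None signals
    "same points as x1", so the result is noise_variance on the diagonal
    and zero elsewhere. An explicit x2, even if element-wise equal to x1,
    is treated as distinct points, so the delta never fires.
    (Hypothetical sketch, not the tfp implementation.)
    """
    if x2 is None:
        # k(X, X): noise on the diagonal only.
        return noise_variance * np.eye(x1.shape[0])
    # k(X1, X2) with X2 passed explicitly: identically zero.
    return np.zeros((x1.shape[0], x2.shape[0]))

X = np.random.rand(3, 2)
k_train = white_noise_kernel(X, noise_variance=0.5)  # 0.5 on the diagonal
k_cross = white_noise_kernel(X, X)                   # zeros, despite equal inputs
```

Note that `white_noise_kernel(X, X)` returns zeros even though the two arguments are element-wise equal, which is exactly the object-identity-vs-equality distinction that the current PositiveSemidefiniteKernel interface cannot express.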