cornellius-gp / gpytorch

A highly efficient implementation of Gaussian Processes in PyTorch
MIT License

[Feature Request] Verbose Settings #1526

Open syncrostone opened 3 years ago

syncrostone commented 3 years ago

🚀 Feature Request

Add a setting that, when turned on, prints the current state of all settings. If a setting is updated anywhere during processing, the changed setting is printed again.

Motivation

Is your feature request related to a problem? Please describe. This is a really great package with lots of complexity that makes it very powerful, but as a result, it can also be difficult to validate my code.

To validate and understand my code, I want to know what is happening behind the scenes. This is hard to do when it's not clear what settings are set where. I end up digging through the source code to figure it out, and it's easy to get lost and hard to record and remember what I find. I just discovered settings.verbose_linalg, which helps but does not solve everything.

Ideally, I would like a verbose_settings setting that prints even more information than verbose_linalg.

Some examples of things I would like to have printed when running, but that don't print with settings.verbose_linalg:

- Preconditioner size
- Whether memory_efficient Toeplitz math is being used
- Whether lazy evaluation of kernels is on

Pitch

Describe the solution you'd like: Add another layer of verbosity in settings that is tiresomely verbose, way more so than verbose_linalg.

Describe alternatives you've considered: Better documentation of the defaults in gpytorch.settings (so the docs at least match the actual defaults), but this still doesn't solve the issue when settings are changed within the code, as they are with certain LazyTensors (e.g. this is the first example I found when searching for settings being modified internally).
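In the meantime, the closest I can get is to dump the settings state myself. Below is a rough sketch of the kind of output I'm after; it assumes the feature-flag classes expose an on() classmethod and the value-style settings expose value(), and it simply skips anything that doesn't fit that pattern.

```python
import gpytorch

# Rough workaround sketch (not a gpytorch API): dump the current state of every
# public object in gpytorch.settings. Assumes feature flags expose .on() and
# value-style settings expose .value(); anything else is skipped.
for name in sorted(dir(gpytorch.settings)):
    if name.startswith("_"):
        continue
    setting = getattr(gpytorch.settings, name)
    try:
        if hasattr(setting, "on"):
            print(f"{name}: on={setting.on()}")
        elif hasattr(setting, "value"):
            print(f"{name}: value={setting.value()}")
    except Exception:
        pass  # not a settings class (e.g. an imported submodule)
```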

gpleiss commented 3 years ago

@syncrostone I think this is a good idea. Some thoughts:

> Preconditioner size:

This is sort of printed out in verbose_linalg mode: ("Running Pivoted Cholesky on a {matrix.shape} RHS for {max_iter} iterations.") But I see that it could be useful to print out exactly what preconditioner is being used - in a non-linalg-specific verbose mode.

> Whether memory_efficient Toeplitz math is being used.

This could be added to the verbose_linalg mode.

> Whether lazy evaluation of kernels is on

I'm trying to think what category of verbosity this falls under. Should there be a more general lazy_evaluation verbosity mode?

> Better documentation of defaults in gpytorch.settings (to at least match the defaults)

Can you open up a separate issue for this?
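For reference, the toggles discussed above already exist and can be used as context managers; the missing piece is logging which of them are actually in effect during a computation. A quick usage sketch on a toy exact GP (the model is purely illustrative):

```python
import torch
import gpytorch

# A toy exact GP, purely for illustration, so the context managers below have
# something to act on.
train_x = torch.linspace(0, 1, 100)
train_y = torch.sin(train_x * 6.28) + 0.1 * torch.randn(100)

class ToyGP(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ToyGP(train_x, train_y, likelihood)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)

# The settings discussed above, toggled together as context managers. What this
# feature request would add is a record of which of these are in effect.
with gpytorch.settings.verbose_linalg(True), \
        gpytorch.settings.memory_efficient(True), \
        gpytorch.settings.lazily_evaluate_kernels(True):
    loss = -mll(model(train_x), train_y)
    loss.backward()
```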

syncrostone commented 3 years ago

> @syncrostone I think this is a good idea. Some thoughts:
>
> Preconditioner size:
>
> This is sort of printed out in verbose_linalg mode: ("Running Pivoted Cholesky on a {matrix.shape} RHS for {max_iter} iterations.") But I see that it could be useful to print out exactly what preconditioner is being used - in a non-linalg-specific verbose mode.

This is good to know, but it would help to have it stated explicitly that the preconditioner settings in particular control this; otherwise it's hard to debug without already knowing about the connection between preconditioning and the use of Pivoted Cholesky. Now that I know this, however, it's much easier for me to figure out the preconditioner myself.

An easy first step would be to add this information to the documentation, but even better would be to have it printed out in a verbose mode (see the sketch below).
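My current understanding of the connection, for anyone else who lands here (please correct me if this is wrong): the pivoted Cholesky factor is the preconditioner, max_preconditioner_size caps its rank, and min_preconditioning_size sets the smallest matrix size for which it is built at all. The settings names below are real; the toy model, mll, and training data are the placeholders from the sketch in the previous comment.

```python
import gpytorch

# Reusing the ToyGP model, mll, train_x, and train_y from the sketch in the
# previous comment. Force the CG + preconditioner path even on a small problem
# so the connection is visible.
with gpytorch.settings.verbose_linalg(True), \
        gpytorch.settings.max_cholesky_size(0), \
        gpytorch.settings.min_preconditioning_size(0), \
        gpytorch.settings.max_preconditioner_size(10):
    loss = -mll(model(train_x), train_y)
    # If this works the way I think it does, the "Running Pivoted Cholesky ..."
    # log line should now report 10 iterations.
```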

> Whether memory_efficient Toeplitz math is being used.

> This could be added to the verbose_linalg mode.

That would be great.

> Whether lazy evaluation of kernels is on
>
> I'm trying to think what category of verbosity this falls under. Should there be a more general lazy_evaluation verbosity mode?

Yes, this would be hugely helpful. This may actually be where most of my frustration lies. I'm trying to use the lazy tensors, but I don't know what operations end up being done, and I just have to trust that it's working optimally. I've been working with Kronecker-structured tensors, and there have been recent improvements there, but without digging into the source code I wouldn't have known that what those improvements changed wasn't already happening before.

For example, I have something that is represented in Kronecker structure as K = I_T ⊗ C, where I_T is the identity matrix, and I need to figure out whether vector products with this Kronecker-structured matrix are being computed as T products with C, or as one larger product with the whole Kronecker-structured matrix.
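To make that concrete, here is roughly what I'm working with (the sizes are made up, and the imports assume the gpytorch.lazy API of the version I'm on; in newer releases this machinery has moved to the linear_operator package). This only checks that the two computations agree; it doesn't tell me which path gpytorch takes internally, which is exactly the information I'd like a verbose mode to log.

```python
import torch
from gpytorch.lazy import KroneckerProductLazyTensor, lazify

# Made-up sizes for illustration: K = I_T kron C, i.e. T diagonal blocks of an N x N matrix C.
T, N = 4, 3
C = torch.randn(N, N)
C = C @ C.t() + N * torch.eye(N)      # make C symmetric positive definite
K = KroneckerProductLazyTensor(lazify(torch.eye(T)), lazify(C))

v = torch.randn(T * N, 1)
lazy_result = K.matmul(v)             # whatever gpytorch does internally
dense_result = K.evaluate() @ v       # explicit (T*N) x (T*N) matrix, then multiply
print(torch.allclose(lazy_result, dense_result, atol=1e-5))
```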

> Better documentation of defaults in gpytorch.settings (to at least match the defaults)
>
> Can you open up a separate issue for this?

Done! #1533

Balandat commented 3 years ago

> For example I have something that is represented in Kronecker structure [...] and I need to figure out whether vector products with this Kronecker-structured matrix are being computed as T products with C, or as one larger product with the whole Kronecker-structured matrix.

To this specific question, the answer is "the former"; the relevant code is here: https://github.com/cornellius-gp/gpytorch/blob/master/gpytorch/lazy/kronecker_product_lazy_tensor.py#L33-L44
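In other words, for K = I_T ⊗ C the matrix-vector product is computed factor by factor (T products with C) rather than by materializing the full (T·N) × (T·N) matrix. Below is a simplified, self-contained illustration of that idea in plain PyTorch; it is not the actual implementation at the link, which is batched and handles an arbitrary number of factors.

```python
import torch

def kron_matvec(A, C, v):
    """Compute (A kron C) @ v without forming the full Kronecker product.

    Simplified illustration of the factor-wise idea behind the linked _matmul
    (the real code is batched and handles any number of factors).
    A is (T, T), C is (N, N), v is (T * N,).
    """
    T, N = A.shape[0], C.shape[0]
    V = v.reshape(T, N)              # unstack v into T blocks of length N
    return (A @ V @ C.t()).reshape(-1)

# Quick check against the explicit Kronecker product. With A = I_T this is
# exactly "T products with C" from the question above.
T, N = 4, 3
A, C = torch.eye(T), torch.randn(N, N)
v = torch.randn(T * N)
explicit = torch.kron(A, C) @ v
print(torch.allclose(kron_matvec(A, C, v), explicit, atol=1e-5))
```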