Closed ambigeV closed 9 months ago
Not sure I fully understand, do you mean you want the task correlation matrix C
from the index kernel rather than the task covariance matrix K
(that includes the different scaling factors)? In that case just compute C = diag(K)^{-1/2} @ K @ diag(K)^{-1/2}
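Concretely, that normalization can be sketched as follows. The `K` below is a made-up example with the same `B @ B.T + diag(v)` structure that an `IndexKernel` uses for its task covariance; the specific values of `B` and `v` are illustrative, not taken from any fitted model.

```python
import numpy as np

# Hypothetical task covariance K for 4 tasks, built the way IndexKernel
# does: a low-rank factor B times its transpose plus a diagonal of
# per-task variances v. These numbers are made up for illustration.
B = np.array([[1.0], [0.9], [-0.5], [0.1]])
v = np.array([0.1, 0.2, 0.3, 0.4])
K = B @ B.T + np.diag(v)

# Correlation matrix: C = diag(K)^{-1/2} @ K @ diag(K)^{-1/2}
d = 1.0 / np.sqrt(np.diag(K))
C = d[:, None] * K * d[None, :]

print(np.round(C, 3))
```

In GPyTorch you would obtain `K` from the kernel itself (e.g. by evaluating `kernel.covar_matrix` to a dense tensor) and apply the same scaling; after normalization `C` has unit diagonal and, for a PSD `K`, off-diagonal entries bounded by 1 in absolute value.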
Thanks for the kind answer, but I feel that, given the approximate computation of this kernel, the task correlation matrix C may sometimes fail to reflect the true correlation among tasks. Say we have 4 tasks, so the matrix C is 4 by 4; even if an entry C_ij has a value close to 1, that doesn't necessarily mean these two tasks are highly correlated.
even if an entry C_ij has a value close to 1, that doesn't necessarily mean these two tasks are highly correlated.
hmm so I guess it does mean that - according to the model and as inferred from the data - the tasks are highly correlated. Whether the tasks are actually correlated, or whether the correlation structure may be incorrectly inferred due to spurious correlations, is a different story (and would be addressed by more data, a stronger prior over the correlation structure, or a more parsimonious model).
Thanks for the reply. Indeed, that question goes beyond the scope of the current model's assumptions.
Hi, I noticed that in GPyTorch the scale kernel is often omitted, and the index kernel is used to capture both the task similarity and the scale simultaneously. In this regard, is there any approach to fetch the task similarity from the index kernel directly? Thanks.