karenyyng / shear_gp

Code for using a Gaussian process (GP) to infer the 2D lensing potential. The accompanying Gaussian process repo is:
https://github.com/karenyyng/george

Create a 'composite kernel' class for (kappa, g1, g2, [lens potential]) #7

Closed: mdschneider closed this issue 8 years ago

mdschneider commented 9 years ago

I want to use the george Solver class for sampling and density evaluation, but it requires a Kernel class instance as an argument to the constructor. For joint modeling of the shear, convergence, and (optionally) the lens potential I need a composite kernel that encompasses each of these fields and their cross-covariances.
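For concreteness, the structure being asked for is the usual weak-lensing one: a block covariance over the stacked fields, with every block following from derivatives of the lens-potential kernel. This is only a sketch of that structure; the actual kernels live in the george fork.

```latex
% Joint covariance of the stacked field f = (\kappa, \gamma_1, \gamma_2) at two
% positions, using the standard relations to the lens potential \psi:
%   \kappa   = \tfrac{1}{2}(\partial_{11} + \partial_{22})\psi,
%   \gamma_1 = \tfrac{1}{2}(\partial_{11} - \partial_{22})\psi,
%   \gamma_2 = \partial_{12}\psi .
K(x_1, x_2) =
\begin{pmatrix}
  K_{\kappa\kappa}   & K_{\kappa\gamma_1}   & K_{\kappa\gamma_2} \\
  K_{\gamma_1\kappa} & K_{\gamma_1\gamma_1} & K_{\gamma_1\gamma_2} \\
  K_{\gamma_2\kappa} & K_{\gamma_2\gamma_1} & K_{\gamma_2\gamma_2}
\end{pmatrix}
```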

karenyyng commented 9 years ago

Do you think we should do this wrapper class at the C++ level or at the Python level?

mdschneider commented 9 years ago

I’d prefer to do it at the C++ level, because that’s all I’m using at the moment to interface with my C++ implementation of the Thresher.


karenyyng commented 9 years ago

If we do not have any plans to use the Python / Cython interface, it should not be hard to write. We would have to be more careful if we wish to use it via Python one day.

mdschneider commented 9 years ago

That's good for now, then. I wasn't sure whether it makes sense to derive from DerivativeExpSquaredKernel and then instantiate member objects of the KappaKappa etc. classes inside this wrapper, or whether there is some better organization.

Also, it's worth considering whether this is really necessary. Besides being a little cleaner, I wanted to use the Solver class because it uses the Eigen library, which has routines for sparse matrices, so this could be useful for addressing issue #5 as well.
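As a point of reference, the kind of Eigen sparse-matrix routine being alluded to looks like the minimal sketch below (plain Eigen, nothing specific to george or the Thresher; the toy matrix just stands in for a thresholded covariance):

```cpp
#include <Eigen/Dense>
#include <Eigen/Sparse>
#include <iostream>
#include <vector>

int main() {
    // Build a small sparse SPD system A x = b as a stand-in for a sparse
    // (e.g. compactly supported) covariance matrix.
    const int n = 5;
    std::vector<Eigen::Triplet<double>> entries;
    for (int i = 0; i < n; ++i) {
        entries.emplace_back(i, i, 2.0);
        if (i + 1 < n) {
            entries.emplace_back(i, i + 1, -0.5);
            entries.emplace_back(i + 1, i, -0.5);
        }
    }
    Eigen::SparseMatrix<double> A(n, n);
    A.setFromTriplets(entries.begin(), entries.end());

    Eigen::VectorXd b = Eigen::VectorXd::Ones(n);

    // Sparse Cholesky factorization and solve.
    Eigen::SimplicialLLT<Eigen::SparseMatrix<double>> llt(A);
    Eigen::VectorXd x = llt.solve(b);
    std::cout << x.transpose() << std::endl;
    return 0;
}
```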

karenyyng commented 9 years ago

I will have to look at how the Solver class works and get back to you.

karenyyng commented 9 years ago

Yeah, I think this new class does not have to derive from the DerivativeExpSquaredKernel class. It can just be a class that instantiates several member objects of the DerivativeExpSquaredKernel class for the relevant physical quantities, with methods that call the member objects' methods once we supply a data matrix. The composite kernel can be an array of pointers to the kernel matrices of the DerivativeExpSquaredKernel objects. We should be able to implement this new class using templates or something similar.
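A minimal structural sketch of that design, with hypothetical names (the real Kernel / DerivativeExpSquaredKernel interfaces are in the karenyyng/george fork and are not reproduced here):

```cpp
// Structural sketch only: all names below are illustrative stand-ins.
#include <vector>
#include <Eigen/Dense>

struct BlockKernel {                          // stand-in for one derivative kernel
    virtual ~BlockKernel() {}
    virtual double value(const double* x1, const double* x2) const = 0;
};

class CompositeLensingKernel {
public:
    // One member kernel per field pair (kappa, gamma1, gamma2), row-major order:
    // kk, kg1, kg2, g1k, g1g1, g1g2, g2k, g2g1, g2g2.
    std::vector<BlockKernel*> block_kernels;  // the "array of pointers" to the blocks

    // Given an n x 2 matrix of galaxy positions, evaluate every block and
    // assemble the full (3n x 3n) composite covariance matrix.
    Eigen::MatrixXd assemble(const Eigen::MatrixXd& positions) const {
        const int n = static_cast<int>(positions.rows());
        Eigen::MatrixXd K(3 * n, 3 * n);
        for (int bi = 0; bi < 3; ++bi)
            for (int bj = 0; bj < 3; ++bj)
                for (int a = 0; a < n; ++a)
                    for (int b = 0; b < n; ++b) {
                        Eigen::RowVector2d xa = positions.row(a);
                        Eigen::RowVector2d xb = positions.row(b);
                        K(bi * n + a, bj * n + b) =
                            block_kernels[3 * bi + bj]->value(xa.data(), xb.data());
                    }
        return K;
    }
};
```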

mdschneider commented 9 years ago

@karenyyng I'm not sure I understand all your comments.

First, I think the 'composite class' needs to be a sub-class of Kernel to match the expected argument type for the Solver constructor.

Second, what do you mean by a 'data matrix' to be passed to the composite class?

Third, the return value from the value method must be a matrix of the kappa, gamma1, gamma2 covariance at fixed x1, x2 values. Is this what you mean when you say an array of pointers for the composite kernel value?

Fourth, what types are to be templated in the class implementation?

karenyyng commented 9 years ago

@mdschneider

  1. Yes, whatever is passed to Solver will need to be a sub-class of Kernel. But do we want to solve the composite kernel matrix, i.e. a single matrix containing all the blocks (kappakappa, gamma1gamma1, gamma2gamma2, lens_potential), or do we want to solve them piecewise, i.e. solve the covariance matrix of KappaKappa, then kappa-gamma1, then gamma1-gamma1, etc.? I am also trying to figure out what we need to solve exactly.
  2. By data matrix, I mean the ellipticity values, i.e. a matrix with dimensions no_of_galaxies x number of ellipticity components (see the sketch after this list).
  3. If we use a 2D array to represent each of the matrices, we can get the pointer to each 2D array and put those pointers in an array to refer to them (i.e. an array of pointers). If you use a vector instead, we will have to think about how to refer to each object in an alternative way.
  4. It depends on 1., 2., and 3. This is also why developing in C++ takes more time than in other languages: there are many things to consider and no unique way to do the job.

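To make item 2 concrete, here is a minimal sketch of the stacked data vector and a single-factorization solve; the names and the placeholder covariance are illustrative only, not code from the fork:

```cpp
#include <Eigen/Dense>

int main() {
    const int n_gal = 100;

    // "Data matrix": one row per galaxy, columns = (e1, e2) ellipticity components.
    Eigen::MatrixXd data = Eigen::MatrixXd::Random(n_gal, 2);

    // Stack it into one observation vector matching the composite kernel's
    // block ordering (here the gamma1 block followed by the gamma2 block).
    Eigen::VectorXd y(2 * n_gal);
    y << data.col(0), data.col(1);

    // Placeholder composite covariance; in practice this would be assembled
    // from the gamma1-gamma1, gamma1-gamma2, and gamma2-gamma2 blocks.
    Eigen::MatrixXd K = Eigen::MatrixXd::Identity(2 * n_gal, 2 * n_gal);

    // One Cholesky factorization of the full matrix handles all cross-covariances.
    Eigen::LLT<Eigen::MatrixXd> llt(K);
    Eigen::VectorXd alpha = llt.solve(y);   // K^{-1} y, the usual GP weight vector
    (void)alpha;
    return 0;
}
```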
mdschneider commented 9 years ago

I'm not sure how we would 'solve' the components of the composite kernel individually because we have all the cross-covariances to worry about.

The rest is clear - thanks!

mdschneider commented 9 years ago

Although, since we know the block structure of the covariance, I suppose we could work out by hand the relevant inverses and conditional covariances to save some computation. This might be complicated to code up though?
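For reference, the standard block-matrix identities being alluded to (Schur complement; nothing specific to this code):

```latex
% For a joint Gaussian over blocks 1 and 2 with symmetric covariance
% K = [ A  B ; B^T  D ], the inverse and the conditional covariance of
% block 1 given block 2 are
K^{-1} =
\begin{pmatrix}
  A^{-1} + A^{-1} B\, S^{-1} B^{\top} A^{-1} & -A^{-1} B\, S^{-1} \\
  -S^{-1} B^{\top} A^{-1}                    & S^{-1}
\end{pmatrix},
\qquad S = D - B^{\top} A^{-1} B,
\qquad
\Sigma_{1\mid 2} = A - B\, D^{-1} B^{\top}.
```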

mdschneider commented 9 years ago

I took a stab at addressing this issue here: https://github.com/karenyyng/george/tree/%237_shear_gp

mdschneider commented 8 years ago

This issue seems to be settled with the lens_fields.cpp implementation we're using.