open-gamma-ray-astro / gamma-astro-data-formats

Data formats for gamma-ray astronomy
https://gamma-astro-data-formats.readthedocs.io
Creative Commons Attribution 4.0 International

Comparison with astromodels #4

Open giacomov opened 8 years ago

giacomov commented 8 years ago

The format I am designing for astromodels to specify functions is very similar to what you are trying to achieve, so maybe we can try to converge. This is what I have:

powerlaw:

    description : 

        A simple power-law with normalization expressed as 
        a logarithm

    formula :

        expression : 10**logK * (x / piv)**index

        latex :

            \frac{dN}{dx} = 10^{logK}~\left(\frac{x}{piv}\right)^{index}

    parameters:

        logK :

            desc : Logarithm of normalization
            initial value : 0
            min : -40
            max : 40

        piv :

            desc : Pivot energy
            initial value : 1
            fix : yes

        index :

            desc : Photon index
            initial value : -2
            min : -10
            max : 10

powerlaw_flux:

    description:

        A simple power-law with the integral between two extremes as 
        normalization parameter

    formula:

        expression : 10**(logK) * (index + 1) * (x / piv)**index / 
                     ( xmin**(index+1) - xmax**(index+1) )

        latex :

            \frac{dN}{dx} = \frac{index + 1}{xmin^{index+1} - xmax^{index+1}}~
                            10^{logK}~\left(\frac{x}{piv}\right)^{index} 

    parameters:

        logK :

            desc : Logarithm of normalization
            initial value : 0
            min : -40
            max : 40

        piv :

            desc : Pivot energy
            initial value : 1
            fix : yes

        index :

            desc : Photon index
            initial value : -2
            min : -10
            max : 10

        xmin :

            desc : Lower bound of integration
            initial value : 1

        xmax :

            desc : Upper bound of integration
            initial value : 10

This can easily be read from python as:


import yaml

with open("astromodels/functions/powerlaw.yaml","r") as f:
    d = yaml.safe_load(f)

print(d['powerlaw']['description'])
'A simple power-law with normalization expressed as a logarithm'

print(d['powerlaw']['parameters']['index'])
{'max': 10, 'initial value': -2, 'desc': 'Photon index', 'min': -10}
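Since the expression field is a plain string, such a definition could in principle be turned into a callable directly. A minimal sketch, assuming the dict layout above (the `make_function` helper name and the `eval`-based approach are illustrative only, not part of astromodels):

```python
# Hypothetical helper: build a callable from a YAML-style function
# definition like the one above.  Uses eval() for brevity; a real
# implementation would want a safer expression parser.
def make_function(defn):
    # Start from each parameter's initial value
    params = {name: spec['initial value']
              for name, spec in defn['parameters'].items()}

    def f(x, **overrides):
        env = dict(params, **overrides)
        env['x'] = x
        # Restrict builtins so only parameter names and x are visible
        return eval(defn['formula']['expression'], {'__builtins__': {}}, env)

    return f

# Inline version of the powerlaw definition from the YAML above
powerlaw = {
    'description': 'A simple power-law with normalization expressed as a logarithm',
    'formula': {'expression': '10**logK * (x / piv)**index'},
    'parameters': {
        'logK':  {'initial value': 0,  'min': -40, 'max': 40},
        'piv':   {'initial value': 1,  'fix': True},
        'index': {'initial value': -2, 'min': -10, 'max': 10},
    },
}

f = make_function(powerlaw)
print(f(2.0))            # 10**0 * (2/1)**-2 = 0.25
print(f(2.0, index=-1))  # 10**0 * (2/1)**-1 = 0.5
```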

Maybe we can analyze our respective trials and merge the good things about both in one format.

joleroi commented 8 years ago

Same question as in https://github.com/gammapy/gamma-astro-data-formats/issues/1: how can I use this format to store fit results? I.e., where do I store fitted values of parameters?

Directly in d['powerlaw']['parameters']['index']? Or is this format just for defining the model?

zblz commented 8 years ago

@joleroi - up to now we have only had model definition formats, but at some point it should definitely include fit results. Probably value and error keywords as a minimum. The error part is a bit difficult: other keywords would be needed to specify its properties (confidence level, Gaussian, asymmetric, etc.), and there should probably be an option to store a posterior distribution as a histogram or as samples. Some things have been done in Astropy to add uncertainties to quantities, but they are quite spread out and I can't really get a good idea of it.
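As an illustration only (none of these keywords are agreed on, and the names are hypothetical), a fitted parameter in the YAML format above might grow value and error entries along these lines:

```yaml
index:
    desc : Photon index
    initial value : -2
    min : -10
    max : 10
    # hypothetical fit-result keywords, not part of any agreed format:
    value : -2.13
    error :
        type : gaussian          # or: asymmetric, posterior, ...
        confidence level : 0.68
        sigma : 0.07
```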

cdeil commented 8 years ago

Is it better to have one schema for model specification and fit results or two?

The model XML format in the Fermi Science Tools uses one schema for both. But then things like TS values and the covariance matrix are missing, and they need to be stored in separate files or added in ad-hoc places in the XML file without any agreement on how they are stored.

It's not clear to me yet if there are significant pros and cons to the two possible approaches (or a mix with some extra info stored with the model spec, and some separated out).

@giacomov and all: do you have an opinion here? Is someone willing to work out a proposal?

giacomov commented 8 years ago

I believe we need two schemas, one for model specification and one for saving results.

Actually, I think that a more helpful separation is between a schema for specifying functions and their parameters (like the one I discussed above), and one for specifying a complete likelihood model made of any number of point-like or extended sources, each one with an arbitrary number of components and features.

I think that the first schema (model specification) should be dead simple, to allow users to implement their own models easily by interacting directly with it through their favorite text editor (or python).

The second schema instead must use functions defined with the first schema (by name) and favor flexibility, because it must accommodate all the needs we have in terms of modeling. This second schema should also be able to store fit results.

My two cents.

cdeil commented 8 years ago

FYI: in the process of looking for a config file handling / schema validation package to use in Gammapy (see https://github.com/gammapy/gammapy/issues/401), I came across python-anyconfig. (There's a ton of similar packages, we haven't made the choice to use that one yet.)

It has functions and even a command-line tool for easy schema generation and validation, using jsonschema in the background. There's also the schema package.

I know the purpose of this repo is to agree on formats, not to provide code. But providing schemas and some docs on how to use them is in scope, no?
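For illustration only, a fragment of a jsonschema document for one parameter mapping in the YAML format above might look like this (the field names follow the examples earlier in this thread; the schema itself is just a sketch, not an agreed format):

```json
{
    "type": "object",
    "required": ["desc", "initial value"],
    "properties": {
        "desc":          {"type": "string"},
        "initial value": {"type": "number"},
        "min":           {"type": "number"},
        "max":           {"type": "number"},
        "fix":           {"type": "boolean"}
    }
}
```

A YAML file could then be loaded with yaml.safe_load and checked against such a schema with jsonschema.validate, which is what python-anyconfig uses in the background.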

giacomov commented 8 years ago

Hi, I found a little more time to work on astromodels, and I have now a quick tutorial which can be followed:

astromodels.readthedocs.org

It shows the basic functionality. More advanced features, such as time-dependent parameters or handling of polarization, are already kind of half-implemented but not mature enough for primetime.

Comments and contributions are very welcome.

cdeil commented 8 years ago

@giacomov - My main comment / concern is that I'm not a fan of astromodels as yet another general astro modeling package. Personally I prefer to build on Sherpa and Astropy and contribute missing features there.

We're still in the situation that there's zero code sharing between 3ML, Naima and Gammapy. @zblz - What do you think? Do you want to use astromodels in Naima?

giacomov commented 8 years ago

@cdeil I agree that it is not desirable to have 3 packages doing the same thing. However, the scope of astromodels is merely the definition and handling (loading, saving) of the model; there will be no fitting or sampling capability, so the overlap is limited.

I tried to take the Sherpa code and modify it for my purposes, but I ended up needing changes too big to be acceptable for a mature and widely used package. It is also pretty difficult to implement these things without breaking other parts, as the package is big.

With Astropy the situation looks a little better, because the modeling part is not as developed, but I would still need pretty big changes.

Also, both packages are quite a lot slower than they could be (especially Astropy). See here for a quick-and-dirty speed comparison:

https://gist.github.com/giacomov/aacc8feebc745717276a

If there is the possibility of working astromodels, or part of it, into one of those packages (more likely Astropy than Sherpa), I am definitely open to it.

Looking forward to our meeting this Friday.

cdeil commented 8 years ago

@giacomov - Thanks for your comment and for the open discussion!

The issue I have with the current way 3ML / astromodels is implemented is that it's a full multi-mission modeling / fitting package, with similar goals as Sherpa (see 2001SPIE.4477...76F on the multi-mission aspect and the Scipy2009 proceeding for the architecture).

I know Sherpa has issues and am aware of what is needed; there are issues in the Sherpa GitHub tracker for all of them, and this will all happen in the coming months.

Basically I think Sherpa is a huge piece of work: well done, actively maintained, now developed in the open, and very friendly to gamma-ray astronomers, and my strong preference would be that we add to it or build on top of it. At the core this means that we directly use or sub-class the Sherpa parameter and model classes, and do not create new parameter and base model classes that are incompatible with the Sherpa framework; i.e., currently you can't fit astromodels models with Sherpa.

I don't have a full vision of what is needed to make Sherpa work for our applications (extra models, combined spatial/spectral models, model serialisation, ...?), but I'd like to try and see if we hit any fundamental issues for important use cases that Sherpa can't handle.

So @giacomov - maybe a good first step would be to define one or a very few analysis use cases that I try to implement with Sherpa, and you with the current 3ML / astromodels. Worst case, we go separate ways and still have a nice cross-check of the tools. Best case, we agree and collaborate a lot in the coming months instead of duplicating effort.

tibaldo commented 8 years ago

@cdeil @giacomov thank you for the discussion. I found the point raised by @giacomov on speed quite relevant: would there be a way to improve the Sherpa modeling package for speed in a way acceptable to current Sherpa users and the main developers?

To me, however, the main question remains: is there a way to plug information from a dataset into Sherpa while remaining completely agnostic about what the data format is and how the data are treated, i.e. just by providing a likelihood value for the dataset, given a certain model, from external software? This capability in my view is the key to the flexibility and ease of extending 3ML to new/different datasets. I would define the test use case to investigate this particular aspect. @cdeil: any suggestions on how to build such a test case concretely?
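Independently of Sherpa, the plugin idea described here can be sketched as a dataset object that exposes nothing but a log-likelihood for a given set of parameter values. All class and function names below are illustrative, not taken from 3ML or Sherpa:

```python
# Illustrative sketch: a dataset "plugin" exposes only a log-likelihood,
# staying completely agnostic about the underlying data format.
class GaussianDataPlugin:
    def __init__(self, x, y, sigma):
        self.x, self.y, self.sigma = x, y, sigma

    def log_like(self, model, **params):
        # chi2-style Gaussian log-likelihood, up to an additive constant
        return -0.5 * sum(
            ((yi - model(xi, **params)) / si) ** 2
            for xi, yi, si in zip(self.x, self.y, self.sigma)
        )

def powerlaw(x, logK=0.0, index=-2.0, piv=1.0):
    return 10 ** logK * (x / piv) ** index

data = GaussianDataPlugin(x=[1.0, 2.0, 4.0],
                          y=[1.0, 0.25, 0.0625],
                          sigma=[0.1, 0.1, 0.1])

# A perfect model gives log-likelihood 0; a fitter only ever needs to
# call data.log_like, never to see the data themselves.
print(data.log_like(powerlaw, logK=0.0, index=-2.0))  # 0.0
print(data.log_like(powerlaw, index=-1.0))            # negative
```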

cdeil commented 8 years ago

I found the point raised by @giacomov on speed quite relevant: would there be a way to improve the Sherpa modeling package for speed in a way acceptable to current Sherpa users and the main developers?

I think the speed is just determined by how fast your model's evaluate method is implemented. Maybe we'll have to re-implement Sherpa morphology or spectral models to be faster, or because we'd like to have e.g. sky lon/lat as fit parameters instead of x/y.
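The kind of comparison in the gist can be sketched with timeit. The structure below is illustrative only, not a reproduction of the gist: it contrasts a plain-function model evaluation with a class that goes through parameter indirection, which is (very roughly) the kind of per-call overhead a modeling framework can add.

```python
import timeit

# Plain function: the cheapest possible model evaluation.
def powerlaw(x, K=1.0, index=-2.0):
    return K * x ** index

# Class with parameter lookup on every call, mimicking framework overhead.
class PowerLaw:
    def __init__(self, K=1.0, index=-2.0):
        self.pars = {'K': K, 'index': index}

    def __call__(self, x):
        p = self.pars
        return p['K'] * x ** p['index']

model = PowerLaw()
t_func = timeit.timeit(lambda: powerlaw(2.0), number=100_000)
t_class = timeit.timeit(lambda: model(2.0), number=100_000)
print(f"function: {t_func:.3f}s  class: {t_class:.3f}s")
```

The point is only that for simple models the per-call overhead, not the math, dominates; the actual gist compares Sherpa and Astropy models directly.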

Note that even if we don't use any of the built-in Sherpa models, we can still use the Sherpa framework, i.e. implement the models as classes that sub-class ArithmeticModel and use the Sherpa Parameter class. A big advantage remains: the powerful and established Sherpa modeling language for constructing complex models (e.g. background + signal * absorption_line), all the Sherpa dataset, optimizer, fit statistic and error estimator algorithms, and the interactive Sherpa fitting environment that many X-ray and gamma-ray astronomers like.

To me, however, the main question remains: is there a way to plug information from a dataset into Sherpa while remaining completely agnostic about what the data format is and how the data are treated, i.e. just by providing a likelihood value for the dataset, given a certain model, from external software? This capability in my view is the key to the flexibility and ease of extending 3ML to new/different datasets. I would define the test use case to investigate this particular aspect. @cdeil: any suggestions on how to build such a test case concretely?

I need to investigate ... depending on what exactly you want to do, it might require implementing a new Data subclass (see https://github.com/sherpa/sherpa/blob/master/sherpa/data.py#L163) and/or a Stat subclass (see https://github.com/sherpa/sherpa/blob/master/sherpa/stats/__init__.py#L48) or, if convolution is involved, a ConvolutionModel subclass (see https://github.com/sherpa/sherpa/blob/master/sherpa/instrument.py#L36).

Not sure if I have time to experiment this week ... I'll try. @giacomov or @tibaldo, if you already have something or have time to have a look, that would be great of course.

cdeil commented 8 years ago

I don't have time to look into Sherpa before our call on Friday. So for now I've just asked here: https://github.com/sherpa/sherpa/issues/184

cdeil commented 8 years ago

Some updates:

@giacomov - It would be very valuable to have a similar example script that represents some important use case for you and does some analysis with astromodels / 3ML (e.g. the time-dependent modeling you mentioned). Then we could check whether we can nicely implement that using Sherpa.

cdeil commented 8 years ago

There's discussion here at PyAstro16 how to move forward with modeling for astronomers.

We're now starting to collect notes and use cases (preferably, but not necessarily with working code) here: https://github.com/astropy/astropy-model-ideas

@giacomov or anyone else interested in modeling: we should continue the discussion there via pull requests and issues. The format spec and serialisation are important, but the scope of the discussion here has extended way beyond what's appropriate for gamma-astro-data-formats.

E.g. https://gist.github.com/giacomov/aacc8feebc745717276a would be an interesting contribution. (I don't think the speed comparison is done correctly; in reality the overhead is lower, because Astropy and Sherpa provide a different interface to the optimizer than the way you're calling it. But such things are exactly what we want to investigate and discuss via example code files and notebooks in the astropy-model-ideas repo.)