diana-hep / carl

Likelihood-free inference toolbox.
BSD 3-Clause "New" or "Revised" License

Deep copy of distributions and parameters #30


cranmer commented 8 years ago

I have some code like this (similar to the n-d example):

```python
from carl.learning import CalibratedClassifierCV
from carl.ratios import ClassifierRatio

# clf, p0, p1 are defined earlier (as in the n-d example)
cc_parametrized_ratio = ClassifierRatio(CalibratedClassifierCV(
    base_estimator=clf,
    cv="prefit",  # keep the pre-trained classifier
    method="isotonic"))
cc_parametrized_ratio.fit(numerator=p0, denominator=p1, n_samples=10000)
```

My understanding is that the fit in the last line is basically running the calibration. I would also guess that it is using the current values of the theano shared variables (the parameters).

In the course of a likelihood scan, where the parameters are changing, this would change both p0 and p1. Is it possible to keep p1 fixed? I.e., is there a way to take a snapshot of the distribution p1 that won't change as the theano variables change values?

glouppe commented 8 years ago

> My understanding is that the fit in the last line is basically running the calibration.

Yes

> I would also guess that it is using the current values of the theano shared variables (the parameters).

Yes

> In the course of a likelihood scan, where the parameters are changing, this would change both p0 and p1. Is it possible to keep p1 fixed?

They would change only if you explicitly change them. If you want to keep p1 fixed, then you should not change its parameters (and in particular, you should not define p1 using parameter and/or component objects that are shared with other distributions).
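
To make this concrete, here is a minimal sketch of the difference between sharing a parameter object across p0 and p1 and giving each distribution its own. It assumes carl's Normal distribution with theano shared variables as parameters, as in the library's examples; the variable names are illustrative only.

```python
import theano
from carl.distributions import Normal

# Shared parameter object: updating mu moves BOTH distributions.
mu = theano.shared(0.0, name="mu")
p0 = Normal(mu=mu, sigma=1.0)
p1 = Normal(mu=mu, sigma=1.0)
mu.set_value(1.0)  # p0 and p1 both shift

# Independent parameter objects: p1 stays fixed while p0 is scanned.
mu0 = theano.shared(0.0, name="mu0")
p0 = Normal(mu=mu0, sigma=1.0)
p1 = Normal(mu=theano.shared(0.0), sigma=1.0)  # its own shared variable
mu0.set_value(1.0)  # only p0 moves; p1 is unchanged
```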

Does that answer your question?

> I.e., is there a way to take a snapshot of the distribution p1 that won't change as the theano variables change values?

Notwithstanding, it might be helpful to define a clone method for making a deep copy of a distribution along with all of its parameters, such that the parameters of the clone are actual distinct copies of the original parameters.
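
For illustration, here is a hedged sketch of what such a clone could look like, built on copy.deepcopy. The helper name is hypothetical (nothing like it exists in carl yet), and it assumes deepcopy recurses through the distribution's theano shared variables so that the clone gets distinct copies of them.

```python
import copy

import theano
from carl.distributions import Normal


def clone_distribution(dist):
    # Hypothetical helper, not part of carl's API: return a deep copy of
    # `dist` whose parameter objects are distinct from the original's,
    # freezing the parameter values at their current state.
    return copy.deepcopy(dist)


# Snapshot p1 before a likelihood scan; later updates to the original
# shared variable should no longer affect the snapshot.
mu = theano.shared(0.0, name="mu")
p1 = Normal(mu=mu, sigma=1.0)
p1_frozen = clone_distribution(p1)
mu.set_value(2.0)  # p1 moves; p1_frozen (under the deepcopy assumption) keeps mu = 0.0
```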