sofia-calgaro / ZeroNuFit.jl

Bayesian unbinned fit of the neutrinoless double-beta decay
MIT License

Implement different signal priors #9

Closed sofia-calgaro closed 1 week ago

sofia-calgaro commented 2 weeks ago

Some ideas

tdixon97 commented 1 week ago

To do this we need to create custom distributions for the cases we care about (I guess they are not defined in Distributions.jl). This can be done like this:

```julia
using Distributions
import Random

# Define the custom distribution type
struct CustomDist <: ContinuousUnivariateDistribution end

# Define the support as (0, ∞)
Distributions.support(::CustomDist) = RealInterval(0.0, Inf)

# Define the PDF (Exponential-like PDF)
Distributions.pdf(::CustomDist, x::Real) = x > 0 ? exp(-x) : 0.0

# Define how to draw random samples (inverse CDF method for the exponential)
Distributions.rand(rng::Random.AbstractRNG, ::CustomDist) = -log(rand(rng))

# Define the log-PDF
Distributions.logpdf(::CustomDist, x::Real) = x > 0 ? -x : -Inf

# Define the CDF
Distributions.cdf(::CustomDist, x::Real) = x > 0 ? 1 - exp(-x) : 0.0

# Create an instance of the distribution
dist = CustomDist()
```
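As a quick language-agnostic sanity check of the inverse-CDF trick used above (a sketch in Python rather than Julia; constants and names are mine), one can verify that `-log(U)` with `U ~ Uniform(0,1)` really behaves like an Exp(1) variable:

```python
import math
import random

rng = random.Random(0)

# inverse-CDF draw for Exp(1): if U ~ Uniform(0,1), then -log(U) ~ Exp(1)
draws = [-math.log(rng.random()) for _ in range(100000)]

# the mean of Exp(1) is 1
mean = sum(draws) / len(draws)
print(abs(mean - 1.0) < 0.02)  # True

# the empirical CDF at x = 1 should be close to 1 - exp(-1)
frac = sum(d <= 1.0 for d in draws) / len(draws)
print(abs(frac - (1 - math.exp(-1))) < 0.01)  # True
```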
tdixon97 commented 1 week ago

I tried it quickly and it seems to work. Now, for the distributions: a uniform distribution in $\log(S)$ means:

$$ u= \log{(S)}$$

$$g(u) = \text{Uniform}[a,b]$$

So:

$$ f(S)\,dS = g(u)\,du \implies f(S) = g(u) \frac{du}{dS} = g(u)/S $$

i.e. it is a function of $1/S$, but cut from $e^a$ to $e^b$. Evidently this distribution cannot be extended down to $S=0$, which is a known thing, and we need some cutoff. Similar maths works for $u=\sqrt{S}$, but this time the transformation gives a density $\propto 1/\sqrt{S}$. Both of these require a lower cutoff; I guess we can see what GERDA used.
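For completeness, the same change of variables spelled out for the square-root case (my own sketch, following the derivation above):

$$ u = \sqrt{S}, \qquad g(u) = \text{Uniform}[a,b] $$

$$ f(S) = g(u) \frac{du}{dS} = \frac{1}{b-a} \cdot \frac{1}{2\sqrt{S}}, \qquad a^2 < S < b^2 $$

so the density is indeed $\propto 1/\sqrt{S}$, with normalisation fixed by the cutoffs.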

tdixon97 commented 1 week ago

I tried the following code to get a prior flat in $\log(S)$. Several of the methods are annoying to define (see the maths above):

```julia
using Distributions
import Random

# Distribution that is flat in log(S) between a and b
struct MyCustomDist <: ContinuousUnivariateDistribution
    a::Float64
    b::Float64
end

# Define the support as (a, b)
Distributions.support(d::MyCustomDist) = RealInterval(d.a, d.b)

# Define the PDF (1/S like)
Distributions.pdf(d::MyCustomDist, x::Real) =
    (x > d.a) && (x < d.b) ? 1 / (x * (log(d.b) - log(d.a))) : 0.0

# Define how to draw random samples (inverse CDF method)
Distributions.rand(rng::Random.AbstractRNG, d::MyCustomDist) =
    exp(rand(rng) * (log(d.b) - log(d.a)) + log(d.a))

# Define the log-PDF (-Inf outside the support, not 0)
Distributions.logpdf(d::MyCustomDist, x::Real) =
    d.a < x < d.b ? -log(x) - log(log(d.b) - log(d.a)) : -Inf

# Define the CDF
Distributions.cdf(d::MyCustomDist, x::Real) =
    x < d.a ? 0.0 : x < d.b ? (log(x) - log(d.a)) / (log(d.b) - log(d.a)) : 1.0

# Create an instance of the distribution
dist = MyCustomDist(1, 10)
```



The CDF and PDF (along with some sampled data) look ok, but we have to try to see what happens in BAT.

![image](https://github.com/user-attachments/assets/531af282-76f6-4f62-934a-0df8f6705d61)
![image](https://github.com/user-attachments/assets/397e3bff-9aeb-4133-83bf-75f87b3f0a49)
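As a cross-check of the log-uniform maths independent of Julia/BAT (a sketch in Python; the constants and function names are mine), the closed-form CDF should match a numerical integral of the PDF, and inverse-CDF samples should stay inside the cutoffs:

```python
import math
import random

A, B = 1.0, 10.0                  # cutoffs, matching MyCustomDist(1, 10)
NORM = math.log(B) - math.log(A)  # normalisation of the 1/x density

def pdf(x):
    # 1/x prior, truncated to (A, B)
    return 1.0 / (x * NORM) if A < x < B else 0.0

def cdf(x):
    if x <= A:
        return 0.0
    if x >= B:
        return 1.0
    return (math.log(x) - math.log(A)) / NORM

def sample(rng):
    # inverse CDF: exp(u * (log B - log A) + log A), u ~ Uniform(0, 1)
    return math.exp(rng.random() * NORM + math.log(A))

# the CDF should match the numerical integral of the PDF
n = 100000
xs = [A + i * (5.0 - A) / n for i in range(n)]
integral = sum(pdf(x) for x in xs) * (5.0 - A) / n
print(abs(integral - cdf(5.0)) < 1e-3)  # True

# samples should stay inside (A, B)
rng = random.Random(0)
draws = [sample(rng) for _ in range(10000)]
print(all(A < d < B for d in draws))  # True
```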
tdixon97 commented 1 week ago

Ok, I tried it in BAT and it didn't seem to work. I also found that Distributions.jl already has a log-uniform: https://juliastats.org/Distributions.jl/stable/univariate/#Continuous-Distributions but the uniform-in-sqrt(x) isn't there. Need to think of other ideas?

tdixon97 commented 1 week ago

We can try to follow it, replacing the $1/x$ with $1/\sqrt{x}$:

https://github.com/JuliaStats/Distributions.jl/blob/00b7fad421c606c1f7d58c38ab0114472e76a747/src/univariate/continuous/loguniform.jl#L1-L12
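To make the replacement concrete, here is a sketch (in Python, with names of my choosing) of what the analogous "sqrt-uniform" would look like: $\sqrt{S}$ uniform on $[\sqrt{a}, \sqrt{b}]$, so the density is $1/(2\sqrt{S}(\sqrt{b}-\sqrt{a}))$ on $(a, b)$, and sampling draws a uniform in $\sqrt{S}$ and squares it:

```python
import math
import random

A, B = 1.0, 10.0
SA, SB = math.sqrt(A), math.sqrt(B)

def sqrtuniform_pdf(x):
    # uniform in sqrt(S): f(S) = 1 / (2 * sqrt(S) * (sqrt(B) - sqrt(A)))
    return 1.0 / (2.0 * math.sqrt(x) * (SB - SA)) if A < x < B else 0.0

def sqrtuniform_cdf(x):
    if x <= A:
        return 0.0
    if x >= B:
        return 1.0
    return (math.sqrt(x) - SA) / (SB - SA)

def sqrtuniform_sample(rng):
    # draw u ~ Uniform(sqrt(A), sqrt(B)) and square it
    u = SA + rng.random() * (SB - SA)
    return u * u

rng = random.Random(0)
draws = [sqrtuniform_sample(rng) for _ in range(10000)]

# sqrt of the draws should look uniform on [SA, SB]: mean ≈ (SA + SB) / 2
mean_sqrt = sum(math.sqrt(d) for d in draws) / len(draws)
print(abs(mean_sqrt - (SA + SB) / 2) < 0.05)  # True
```

The Julia version would mirror the LogUniform source linked above, swapping the `log`/`exp` pairs for `sqrt`/squaring.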