kazuakiyama opened 1 week ago
I did some overdue infrastructure work in StationaryRandomFields.jl to make this happen.
See https://github.com/EHTJulia/StationaryRandomFields.jl/issues/5.
We may want to consolidate VLBIImagePriors and VLBISkyRegularizers then. A lot of this overlaps with what is implemented in VLBIImagePriors. For instance, we already have L2 and TSV implemented there for a variety of base distributions, including something similar to an exponential and a t-distribution. Additionally, you do need to be careful about what you call probabilistic. For example, TSV is not a proper prior since it is invariant under a total image scaling.
I am a little concerned about faking the `rand` call. `rand` has precise semantics for a subtype of `Distribution`, and there are specific sampling algorithms, e.g., parallel tempering, that will assume that when you call `rand` you are drawing from the distribution exactly.
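For concreteness, the contract being described is the standard Distributions.jl one, where `rand` must return an exact draw (a minimal illustration, not code from either package):

```julia
using Distributions, Random

d = Laplace(0.0, 1.0)
x = rand(Random.default_rng(), d)  # an exact i.i.d. draw from d
logpdf(d, x)                       # normalized log-density consistent with that draw
```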
For the wavelet, it actually sounds really easy to give it a probabilistic interpretation. We don't even need to make a prior in that case; we can just define a function that does the wavelet transform and call that in the sky model function. For instance:
```julia
function sky(params, meta)
    (; img) = params
    img_trans = wavelet_transform(img)
    m = ContinuousImage(img_trans, DeltaPulse())
    return m
end

prior = (
    img = MvLaplace(nx, ny), # We could implement this?
)
```
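As a side note, `wavelet_transform` is left undefined above; a minimal sketch using Wavelets.jl might look like the following (the inverse transform and the Daubechies-4 basis are illustrative assumptions, not a fixed choice):

```julia
using Wavelets

# Hypothetical helper: map wavelet-domain parameters to image space via an
# inverse discrete wavelet transform (basis choice is an assumption).
wavelet_transform(img::AbstractMatrix) = idwt(img, wavelet(WT.db4))
```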
@ptiede
> We may want to consolidate VLBIImagePriors and VLBISkyRegularizers then. A lot of this overlaps with what is implemented in VLBIImagePriors. For instance, we already have L2 and TSV implemented there for various base distributions including something similar to an exponential and a t-distribution.
Yeah, I would note that we don't have TSV, but L1 and L2 are being implemented in StationaryRandomFields.jl as well. My preference is that, for regularizers already covered by VLBIImagePriors or StationaryRandomFields.jl, we internally call those implementations while this package still provides an RML-like front end. Perhaps the documentation should suggest that a simpler way is possible for some priors with the existing Comrade.jl.
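As a rough illustration of that delegation idea (all names here are hypothetical, not the actual API of either package):

```julia
using Distributions

abstract type AbstractRegularizer end

# Hypothetical front-end type delegating to an existing distribution from
# VLBIImagePriors or StationaryRandomFields.jl.
struct DelegatedRegularizer{D<:Distribution} <: AbstractRegularizer
    dist::D
end

# The RML-style cost is the negative log-density of the wrapped
# distribution, so the two interfaces stay consistent.
cost(r::DelegatedRegularizer, img::AbstractArray) = -logpdf(r.dist, vec(img))
```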
> Additionally, you do need to be careful about what you call probabilistic. For example, TSV is not a proper prior since it is invariant under a total image scaling.
That is a great point. If we follow the terminology of Distributions.jl, how about calling a TSV-like prior "Sampleable" rather than "Probabilistic"?
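For reference, that split already exists in the Distributions.jl type tree: `Sampleable` only promises `rand`, while `Distribution <: Sampleable` additionally promises a normalized density:

```julia
using Distributions

Distribution <: Sampleable  # true: every Distribution is Sampleable,
                            # but not every Sampleable is a proper Distribution
```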
> I am a little concerned about faking the `rand` call. `rand` has precise semantics for a subtype of `Distribution`, and there are specific sampling algorithms, e.g., parallel tempering, that will assume that when you call `rand` you are drawing from the distribution exactly.
I don't have an immediate resolution, as I believe this is required for the Comrade ecosystem. Maybe we should show a warning message for this kind of rand-faked regularizer?
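A minimal sketch of what that warning could look like (the regularizer type and its sampler are hypothetical stand-ins, not real package code):

```julia
using Random

struct FakedRandRegularizer end  # hypothetical stand-in for a rand-faked regularizer

function Base.rand(rng::Random.AbstractRNG, r::FakedRandRegularizer)
    @warn "`rand` on $(typeof(r)) is not an exact draw from a proper distribution; samplers that assume exact draws (e.g. parallel tempering) may misbehave." maxlog=1
    return randn(rng)  # placeholder for the regularizer's pseudo-sampler
end
```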
> Your example
That's an interesting point for the wavelet L1 implementation. I think it is possible. It can be done with something like this using the development version of StationaryRandomFields.jl (not in the main branch yet, as I wrote in the issue):
```julia
using Distributions
using StationaryRandomFields

img = UnivariateLaplaceRandomField((nx, ny))

# dist can be specified if you want to change the scale parameter
img = UnivariateLaplaceRandomField((nx, ny), dist=Laplace(0, 1/hyperparameter))
```
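Presumably this would then plug into the prior in place of the hypothetical `MvLaplace` above, along the lines of:

```julia
# assuming the development API sketched above
prior = (
    img = UnivariateLaplaceRandomField((nx, ny)),
)
```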
@ptiede @anilipour
I'm thinking about the possibility of adding fully probabilistic versions of some of the regularizers (not replacing the current ones at this moment), which would bring extra benefits.
On the other hand, isotropic TV and MEM do not have good analytic probabilistic counterparts, and a sum of multiple regularizers will not have a probabilistic counterpart either (L2+TSV would be a special case). This can be handled by adding a trait; see an example in ComradeBase.jl. You can do this by setting up traits in a similar way to the ComradeBase modifiers and defining how they propagate through operators: any operator (`+`, `-`) between regularizers makes the resultant output `NotProbabilistic` (the only exception being L2 and TSV), while multiplication by a scalar factor (`*`) just scales the probabilistic function and therefore won't change its probabilistic nature. What do you think?
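A rough sketch of how such a trait could look (hypothetical names, following the Holy-trait pattern used by the ComradeBase modifiers):

```julia
abstract type AbstractRegularizer end

# Trait values: whether a regularizer admits a proper probabilistic reading.
struct IsProbabilistic end
struct NotProbabilistic end

# Default: do not assume a regularizer is a proper prior.
probabilistic(::AbstractRegularizer) = NotProbabilistic()

struct AddedRegularizer{A,B} <: AbstractRegularizer
    a::A
    b::B
end
# Summing regularizers generally breaks the probabilistic interpretation
# (with L2 + TSV as the special case noted above).
probabilistic(::AddedRegularizer) = NotProbabilistic()

struct ScaledRegularizer{R,T<:Real} <: AbstractRegularizer
    reg::R
    factor::T
end
# Scaling by a constant only rescales the log-density, so the trait of the
# wrapped regularizer passes through unchanged.
probabilistic(r::ScaledRegularizer) = probabilistic(r.reg)
```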