wmkouw closed this issue 1 year ago
@wmkouw let's put it into our feature list. There is one nuance though: ReactiveMP.jl materializes (read: computes) messages at the moment of computing individual marginals. As an example, suppose a variable x is connected to two nodes f_1 and f_2. At the moment when ReactiveMP.jl "sends" a message from f_1, it in fact sends a "notification" that a message could be computed. The actual computation happens when ReactiveMP.jl "sends" a message from f_2. At this moment the inference backend "stops", computes all relevant messages, and calculates the associated marginal. In principle, there is an undefined period of time between the moment of "notification" and the actual message computation. (It is possible to force ReactiveMP.jl to compute a message in place, with the as_message function.)
My proposal: would it be useful to have an option to extend a marginal and to attach to its structure the messages that have been used to compute it? For example, you could have a marginal for q(x_1) and you could have access to the messages f_1_x1, f_2_x1 that were used to compute q(x_1).
@bvdmitri In general I think that would be a useful feature. I have been doing similar operations manually in order to debug nodes/agents and having that be straightforward would be very handy indeed!
Sorry for the late response; I was on vacation.
There are two reasons for inspecting messages: 1) debugging and 2) education. Sepideh showed me a @call_rule macro to force RMP to compute a message or marginal. I think that's perfectly fine for education purposes; last year we compared manual message calculations with ForneyLab's message computations, and I think the macro will allow me to do that with RMP this year.
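For reference, a @call_rule invocation might look roughly like the following sketch. The exact rule arguments (here m_μ, m_v for incoming messages on the mean and variance edges) depend on the node type and on the installed ReactiveMP version, so treat this as illustrative rather than as the confirmed syntax:

```julia
using ReactiveMP

# Sketch: ask ReactiveMP to compute the outbound message of a
# NormalMeanVariance node towards its :out interface, given point-mass
# messages on the mean and variance edges. Argument names follow the
# rule-specification convention but should be checked against the docs.
msg = @call_rule NormalMeanVariance(:out, Marginalisation) (
    m_μ = PointMass(1.0),
    m_v = PointMass(2.0),
)
```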
As for debugging, it sounds like your proposal (a user switch to append the constituent messages f_1_x1 and f_2_x1 to the marginal object q(x_1)) would meet our needs. I suggest we try it, and if it turns out to be insufficient, then we can think of other solutions.
If you want, Sepideh, Tim and I can try to implement this feature?
This issue may be solved by the addons in issue #172 and PR #215. Basically (once completed) it should be possible to save extra information in messages and marginals. This information could be the messages used to compute a marginal for example. Some kind of history object could be implemented with these addons that shows the full computation history of the messages/marginals. Would this solve this issue @wmkouw, @abpolym, @Sepideh-Adamiat ?
Possibly. If I understand you correctly, you want to replace the scale factors in ScaledMessage with the history object? If that object is exposed to the user, then I can imagine it would indeed let us inspect the messages leading up to a bug / unexpected behaviour.
But I would argue for making this user-friendly, as in: there would be a keyword argument in inference that automatically creates that history object (KeepEach(), I imagine?) in place of the scale factors and returns it as an entry in the results. What is the intended protocol for ScaledMessage?
In #215 we introduce so-called addons. These yield extra information and are propagated in the message. A Message is now (approximately) defined as:
struct Message{D,A}
    distribution::D
    addons::A
end
In this addons field we can pass extra pieces of information, which could potentially also be used to keep a memory of the preceding messages/marginals. For the user this won't be a burden, as this is just as easy as specifying MeanField. An example for scale factors:
@model [ addons = ( AddonLogScale(), ) ] function model(...)
# model here
end
For the memory idea, this could become something like
@model [ addons = ( AddonMemory(KeepLast()), ) ] function model(...)
# model here
end
Memory is then easily accessible from the resulting marginals that you obtain at the end of the inference procedure.
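If that works as described, retrieving the memory could look something like the following sketch. The accessor names here (and the model) are assumptions for illustration, not the confirmed API:

```julia
# Hypothetical usage sketch: run inference with the memory addon enabled
# and inspect the addons attached to a resulting marginal. `getaddons` is
# assumed to be the accessor; check the ReactiveMP/RxInfer docs for the
# actual name.
results = inference(
    model  = my_model(),             # hypothetical model
    data   = (y = 1.0,),
    addons = (AddonMemory(KeepLast()),),
)
q_x    = results.posteriors[:x]      # a Marginal carrying its addons
memory = ReactiveMP.getaddons(q_x)   # assumed accessor for the addons field
```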
What's the status of this issue?
@wmkouw Do the memory addons provide the functionality you needed?
I don't know. @abpolym and @Sepideh-Adamiat would look into this but we haven't discussed it recently. I'll check with them when I get back to work.
Should be fixed in #240
This is wonderful! Works entirely as advertised. I tried
@model function regression()
    x = datavar(Float64)
    y = datavar(Float64)
    w ~ Normal(mean = 1.0, var = 1.0)
    y ~ Normal(mean = x * w, var = 1.0)
end

results = inference(
    model = regression(),
    data = (x = 0.5, y = 0.0),
    returnvars = (w = KeepLast(),),  # note the trailing comma: a single-entry NamedTuple
    initmessages = (w = NormalMeanVariance(0.0, 100.0),),
    free_energy = true,
    addons = (AddonMemory(),),
)
and got
Marginal(NormalWeightedMeanPrecision{Float64}(xi=1.0, w=1.25)) with (AddonMemory(Product memory:
    Message mapping memory:
        At the node: NormalMeanVariance
        Towards interface: Val{:out}
        With local constraint: Marginalisation()
        With addons: (AddonMemory(nothing),)
        With input marginals on Val{(:μ, :v)} edges: (PointMass{Float64}(1.0), PointMass{Float64}(1.0))
        With the result: NormalMeanVariance{Float64}(μ=1.0, v=1.0)
    Message mapping memory:
        At the node: *
        Towards interface: Val{:in}
        With local constraint: Marginalisation()
        With meta: TinyCorrection()
        With addons: (AddonMemory(nothing),)
        With input messages on Val{(:out, :A)} edges: (NormalMeanVariance{Float64}(μ=0.0, v=1.0), PointMass{Float64}(0.5))
        With the result: NormalWeightedMeanPrecision{Float64}(xi=0.0, w=0.25)
),)
That's a lot of information, which will be very useful, I think.
We'll need an example of how to use this for debugging in the documentation. I also noticed that AddonMemory is not included in test_addons.jl. Tim, Sepideh and I can pick this up? That would give us a chance to become familiar with it.
Sounds good! Looking forward to the example. Perhaps we can even highlight this under a separate header in the docs of RxInfer.jl, as a lot of people are looking for this feature. @bvdmitri what do you think?
Yes! We can start something like a "Debugging" section in the documentation, where addons will be one part.
Sepideh and I will make a draft for a debugging section. We will aim for a PR in late March.
Perhaps this also relates to #60, asking for a "sharp bits" section.
Status update: we're working in branch rmp#162 of RxInfer to add a Debugging section to the docs there. If it should be part of the ReactiveMP docs instead, let us know.
@bvdmitri what do you think? I am fine with just having it in RxInfer.
I think it's fine. User-friendly high-level guides/tutorials should be in RxInfer; ReactiveMP should only give the API description.
The current Debugging.md section is just a start. I propose we add explanations to it as we develop new ways to debug RxInfer/ReactiveMP code.
Closing this now due to #326 and RxInfer.jl#123
Do we have a procedure for inspecting specific messages during inference?
In ForneyLab, the step!(data, marginals, messages) function accepted an empty Array{Message}() that would be populated during inference. That array could be inspected and visualized, which was useful for debugging and education purposes (see for instance here). I had a quick look through ReactiveMP's API (e.g., https://biaslab.github.io/ReactiveMP.jl/stable/lib/message/) but couldn't find a method for accessing specific messages sent by nodes. I know you can expose individual nodes through node, variable ~ Node(args...) (GraphPPL API), but does that also let you access messages belonging to that node? I imagine that if we have a reference to a specific message, then we can define an observable that subscribes to it and keeps the messages computed during inference for later inspection.
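The subscription idea above could be sketched like this. The interface accessors (`messageout` and the :out interface symbol) are guesses at ReactiveMP internals rather than a documented API, so treat this purely as an illustration of the intent:

```julia
using Rocket

# Hypothetical sketch: given a node handle obtained via
#   node, w ~ NormalMeanVariance(0.0, 1.0)
# subscribe to its outbound message stream and record every message that
# is actually computed during inference, for later inspection.
history = Any[]
subscription = subscribe!(
    ReactiveMP.messageout(node, :out),        # assumed message observable
    (message) -> push!(history, message),     # store each computed message
)
# ... run inference here ...
unsubscribe!(subscription)  # stop recording; `history` holds the messages
```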