Closed Curvedrain closed 2 months ago
What a great test. Sorry for the delay in responding, but this was in the back of my head for a while. I was having trouble figuring out what was happening, but then I read your final comment and something clicked:
https://github.com/jmschrei/pomegranate/blob/master/pomegranate/factor_graph.py#L106
20 is the default number of iterations of LBP, encoded in both the `BayesianNetwork` and the underlying `FactorGraph` objects. In your example network, each iteration causes the information to "climb" one node up the chain. But when I set `max_iter` to a higher number in the `BayesianNetwork` object, I didn't see any difference... because I wasn't actually passing that value into the `FactorGraph` object! So, that's the underlying bug here. I've fixed this for 1.1.0, which I will release soon. Please re-open if this continues to be an issue. I am adding this example to the unit tests, because the number of 1s should be equal to the maximum number of iterations (+1 for the evidence).
Hello, I've been writing some test code to see how far evidence on one node will impact the marginal probabilities (and predictions) of its parent and child nodes. I have the following distribution, with the idea that the 0th index becomes increasingly more likely while the 1st index becomes increasingly less likely.
The predicted probabilities only seem to be correct up to a point. In the example, the evidence is that the last node has index 1, so all ancestors should be predicted to have index 1 as well, since a node with index 0 gives a 0% chance of index 1 in any of its children. Any ideas why this may occur?