Closed · saurabh1deshpande closed this issue 4 years ago
@saurabh1deshpande Did you also change the values accordingly? Because we use the order of evidence variables to infer which value corresponds to any specific state. For example, a CPD on G
with the evidence [D, I]
would expect a values array of the form:
      |    D_0    |    D_1    |
      | I_0 | I_1 | I_0 | I_1 |
  G_0 |     |     |     |     |
  G_1 |     |     |     |     |
  G_2 |     |     |     |     |
whereas the evidence [I, D]
would expect the values array to be of the form:
      |    I_0    |    I_1    |
      | D_0 | D_1 | D_0 | D_1 |
  G_0 |     |     |     |     |
  G_1 |     |     |     |     |
  G_2 |     |     |     |     |
As you can see, depending on the order of the evidence, the same column of the values array is assigned to a different combination of evidence states, which leads to different inference results.
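The column-ordering rule above can be sketched in plain NumPy, independent of pgmpy. The numbers below are illustrative grade-CPD values in the style of the student-network tutorial; the point is only the column permutation needed when the evidence order is swapped:

```python
import numpy as np

# P(G | D, I) with evidence order [D, I]: columns are
# (D_0,I_0), (D_0,I_1), (D_1,I_0), (D_1,I_1)
values_DI = np.array([
    [0.3, 0.05, 0.9, 0.5],   # G_0
    [0.4, 0.25, 0.08, 0.3],  # G_1
    [0.3, 0.7, 0.02, 0.2],   # G_2
])

# For evidence order [I, D] the columns must instead read
# (I_0,D_0), (I_0,D_1), (I_1,D_0), (I_1,D_1), i.e. the two
# evidence axes are swapped before flattening back to 2-D.
values_ID = (values_DI
             .reshape(3, 2, 2)    # axes: G, D, I
             .transpose(0, 2, 1)  # axes: G, I, D
             .reshape(3, 4))

print(values_ID)
```

Passing `values_DI` unchanged while declaring the evidence as `['I', 'D']` silently reassigns those middle columns to the wrong evidence states, which is exactly why the query results change.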
Ah, no I didn't. Thanks for the explanation!
Hi, I was trying out 2. Bayesian Networks.ipynb and swapped the evidence order while creating the CPD for grade (['D', 'I'] instead of ['I', 'D']), and all the inference probabilities changed. Intuitively, I feel the order should not change the inference probabilities.
l_dist = infer.query(['L'], evidence={'I': 0, 'D': 0})
print(l_dist.get('L'))
For ['I', 'D']:
╒═════╤══════════╕
│ L   │   phi(L) │
╞═════╪══════════╡
│ L_0 │   0.6114 │
├─────┼──────────┤
│ L_1 │   0.3886 │
╘═════╧══════════╛
For ['D', 'I']:
╒═════╤══════════╕
│ L   │   phi(L) │
╞═════╪══════════╡
│ L_0 │   0.3489 │
├─────┼──────────┤
│ L_1 │   0.6511 │
╘═════╧══════════╛
Maybe this is a naive question, but I would like to know whether this is correct behavior and what might be causing it.
Thanks !