rfl-urbaniak / MRbook


outline awareness growth introduction #45

Closed rfl-urbaniak closed 5 months ago

rfl-urbaniak commented 5 months ago

@marcellodibello I outlined the general dialectic as I see it in the intro to the awareness paper. Ready for you to go over and perhaps start thinking about reshuffling the existing content.

marcellodibello commented 4 months ago

I am having trouble accessing the repository with this stuff.

marcellodibello commented 4 months ago

I have given some thought to this paper.

  1. The first thing would be just to state our theory up front. The theory has two parts, one for refinement and another for expansion. In the case of expansion a state is added to one of the nodes in the network; the network structure is not modified. This is easy to deal with, since the network structure does not change. We have a constraint (C) that seems well suited to capture how to handle such cases, and it is close enough to reverse Bayesianism. If the network structure is modified, a number of subcases must be considered. See point 3 below.
  2. One difference between our constraint (C) and reverse Bayesianism is that RB applies to posterior probabilities -- the ratio of posterior probabilities stays the same before and after awareness growth -- while constraint (C) is about the likelihoods as they are entered into the probability tables: the likelihoods stay the same. (The contrast is spelled out in the math sketch after this list.)

  3. Let's consider cases when the network is modified. These are cases of refinement in our definition. There are several sub-cases here. The simplest cases are those in which arrows and nodes are added at the periphery of the network. Say we have a network A -> B and we add a third node C and an arrow, like this: A -> B -> C. Here we only need to add a new probability table for C conditional on B; everything else stays the same. The veracity example is of this sort. (The same would apply if we added C coming off A, like this: C <- A -> B. Now A would have two downstream nodes, and we would only need to add a table for C conditional on A.) See the first code sketch after this list.

  4. In a different case we add C upstream, like this: C -> A -> B. We do not discuss this example in the paper. Here what needs to be changed is the prior probability table for A: we need to add a prior probability for C and then a probability table for A conditional on C. I do not think we would retain the old prior distribution for A; we should probably just get rid of it, since the new marginal for A is now fixed by P(C) and P(A|C). Suppose we start with H -> E, with a prior distribution on H (say, the proportion of sick versus healthy people). Next we gather more information about age and sickness (H), so we can find the marginal distribution for H using information about disease prevalence in different age groups together with a prior distribution over age. This new information would seem to displace the old prior assessment, which was not based on clear evidence. A strange phenomenon happens here: we are updating our priors even though we are not updating our beliefs in the standard sense. To be discussed; it seems a common phenomenon. All else in the graph stays the same except the priors for A. See the second code sketch after this list.

  5. Next we can modify the network like this: A -> B <- C. This is like the lighting example we consider in the paper. Here we need to change the probability table for B, because we now need to condition on both A and C, which we did not have to do before. Everything else stays the same, but this table must be changed. See the third code sketch after this list.

  6. There will be other cases, and it is hard to consider all the combinations. One other common option is this: we start with A -> B and then refine it into A -> C -> B, while perhaps also keeping A -> B. We discuss this briefly in footnote 18 at the end. Here, interestingly, it seems that the conditional probability table for B given A stays the same. So the two new conditional probability tables, for C given A and for B given C, should agree with the original probability table for B given A: composing them by summing out C should recover the original P(B|A). If they do not, then there is some inconsistency in the data. See the fourth code sketch after this list.

  7. So it is clear that in cases of refinement the conditional probability tables sometimes remain the same, and it is interesting to ask when. The cases we have, starting with A -> B, are these. We can get: A -> B -> C (no changes, just add a new table for C given B); C -> A -> B (remove the prior table for A, add a prior table for C and a new table for A given C); A -> B <- C (new table for B given A and C); A -> C -> B (new tables for C given A and B given C, but they should agree with the original table for B given A).

  8. Once we have outlined the theory and given some illustrative examples, we need to show that it is better than reverse Bayesianism. The key idea is that we can make decisions about what to retain of our old distributions and what to get rid of by looking at the structure of the network. This process, however, is still a bit unclear and impressionistic.

  9. Need to discuss what we mean by structural assumptions. These are the assumptions captured by the network: causal, semantic, common-sense. Give illustrations.
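
A possible way to state the contrast from points 1-2 (a sketch; the notation is mine, not fixed in the paper: P is the pre-growth distribution, P' the post-growth one, the H_i are old hypotheses, and pa(X) are the parents of node X in the network):

```latex
% Reverse Bayesianism: the ratio of (posterior) probabilities over the
% old hypotheses is preserved across awareness growth:
\[ \frac{P'(H_i)}{P'(H_j)} \;=\; \frac{P(H_i)}{P(H_j)} \]
% Constraint (C): the likelihood entries of the probability tables are
% preserved, for every old node X and old parent configuration pa(X):
\[ P'\big(x \mid \mathrm{pa}(X)\big) \;=\; P\big(x \mid \mathrm{pa}(X)\big) \]
```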
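
First code sketch, for point 3 (all numbers are made up for illustration; variables are binary): adding C downstream of A -> B only requires the new table P(C|B), and the old joint over A and B is provably untouched.

```python
import numpy as np

p_a = np.array([0.3, 0.7])                    # P(A)
p_b_given_a = np.array([[0.9, 0.1],           # P(B | A=0)
                        [0.2, 0.8]])          # P(B | A=1)

# Awareness growth: the ONLY new object is a table for C given B;
# P(A) and P(B|A) are carried over unchanged.
p_c_given_b = np.array([[0.6, 0.4],           # P(C | B=0)
                        [0.1, 0.9]])          # P(C | B=1)

joint_ab = p_a[:, None] * p_b_given_a                       # old joint P(A, B)
joint_abc = joint_ab[:, :, None] * p_c_given_b[None, :, :]  # new joint P(A, B, C)

# Summing C back out recovers the old joint exactly:
assert np.allclose(joint_abc.sum(axis=2), joint_ab)
```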
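
Second code sketch, for point 4 (numbers made up): refining A -> B into C -> A -> B, the old prior table for A is discarded; we specify P(C) and P(A|C) instead, and the new marginal for A falls out by summing C out.

```python
import numpy as np

p_c = np.array([0.25, 0.75])                  # new prior P(C), e.g. age groups
p_a_given_c = np.array([[0.05, 0.95],         # P(A | C=0), e.g. prevalence by group
                        [0.40, 0.60]])        # P(A | C=1)

new_p_a = p_c @ p_a_given_c                   # P(A) = sum_c P(C=c) P(A | C=c)
print(new_p_a)                                # this replaces the old prior for A
# P(B | A) and everything downstream of A stays exactly as it was.
```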
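
Third code sketch, for point 5 (numbers made up): refining A -> B into A -> B <- C, we keep P(A), add a prior for C, and re-elicit the table for B, which is now conditioned on both A and C.

```python
import numpy as np

p_a = np.array([0.3, 0.7])                    # P(A), unchanged
p_c = np.array([0.5, 0.5])                    # new prior P(C)
p_b_given_ac = np.array([[[0.9, 0.1],         # P(B | A=0, C=0)
                          [0.6, 0.4]],        # P(B | A=0, C=1)
                         [[0.3, 0.7],         # P(B | A=1, C=0)
                          [0.1, 0.9]]])       # P(B | A=1, C=1)

# The joint now factors as P(A) * P(C) * P(B | A, C):
joint = p_a[:, None, None] * p_c[None, :, None] * p_b_given_ac
assert np.isclose(joint.sum(), 1.0)
```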
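
Fourth code sketch, for point 6 (numbers made up): refining A -> B into A -> C -> B, the new tables P(C|A) and P(B|C) should compose back into the original P(B|A). With rows indexed by the conditioning variable, the composition is just a matrix product, so the consistency check is one line.

```python
import numpy as np

p_b_given_a = np.array([[0.70, 0.30],         # original P(B | A)
                        [0.40, 0.60]])
p_c_given_a = np.array([[0.8, 0.2],           # new P(C | A)
                        [0.2, 0.8]])
p_b_given_c = np.array([[0.8, 0.2],           # new P(B | C)
                        [0.3, 0.7]])

# Consistency: P(B|A) = sum_c P(C=c | A) P(B | C=c).
recomposed = p_c_given_a @ p_b_given_c
assert np.allclose(recomposed, p_b_given_a)   # a mismatch signals inconsistent data
```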