jpivarski / causality-not-actual

https://jpivarski.github.io/causality-not-actual/lab/index.html?path=causality-not-actual.ipynb

Attempt at a definition of causality #2

Open jpivarski opened 7 months ago

jpivarski commented 7 months ago

"A causes B" doesn't simply mean "A came before B," even accounting for special relativity (limiting "before" to the past light-cone). It has to have some notion of B happening "because of" or "due to" A, or that B "would be" if A.

There are other issues that complicate the definition of "A causes B" in practical settings:

  1. something else, A', might also cause B (i.e. B would happen if either A or A' happened)
  2. A might influence B, making it more likely, but not 100% probable.

Both of these can be dealt with, and have been, by causal networks (to deal with issue 1) and Bayesian methods (to deal with issue 2). These extensions are well-covered elsewhere.

What bothers me more is clarifying "because of," "due to," and "would be." This is where I think it's crucial to invoke possible worlds, as I do in the talk. In programming, the basic functions AND(x, y), OR(x, y), and NOT(x) are defined in terms of truth tables, by enumerating all of the possible values of boolean x and y, then saying what each function's output is. The step from function to truth table is converting a "would be" into an "is":

| AND(x, y) would be | if x and y are |
| --- | --- |
| false | false, false |
| false | false, true |
| false | true, false |
| true | true, true |

The statement "AND(x, y) would be true if x and y are both true" is a subjunctive statement, but the truth table is a declarative statement. Expanding to a set of possible worlds lets us use ordinary declarative logic on the system: the modal operators $\square \varphi$ and $\diamond \varphi$ of possible-worlds (Kripke) semantics are just the universal and existential quantifiers, $\forall w\, \varphi(w)$ and $\exists w\, \varphi(w)$, ranging over the set of values the predicate $\varphi$ can take in all the possible worlds. We can use everything that we know about $\forall$ and $\exists$ on this set, just remembering that it refers to a set of worlds of which only one is actual.
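As a concrete illustration, the "would be" to "is" conversion can be written in a few lines of Python that enumerate the possible worlds and quantify over them (a sketch, not code from the notebook):

```python
from itertools import product

# Each possible world is an assignment of the booleans x and y.
# Enumerating the worlds turns the subjunctive "AND(x, y) would be ..."
# into a declarative table of (world, value) pairs.
worlds = list(product([False, True], repeat=2))
truth_table = {(x, y): (x and y) for x, y in worlds}

# The modal operators become ordinary quantifiers over the set of worlds:
necessarily_and = all(truth_table[w] for w in worlds)  # "box": true in every world
possibly_and = any(truth_table[w] for w in worlds)     # "diamond": true in some world

assert necessarily_and is False  # AND is false in some possible worlds
assert possibly_and is True      # AND is true in at least one possible world
```

The whole subjunctive content of AND lives in the dict; `all` and `any` are exactly the $\forall$ and $\exists$ of the paragraph above.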

So, given a simple causal network

```mermaid
flowchart LR
    A --> B
```

(to deal with issue 1) and no Bayesian probabilities other than 0% and 100% (to deal with issue 2), "A causes B" means that this is the truth table:

| A | B |
| --- | --- |
| false | false |
| true | true |

Of the 4 potential combinations,

| A | B |
| --- | --- |
| false | false |
| false | true |
| true | false |
| true | true |

only 2 are in the set of possible worlds. We could talk about a large set (of 4) possible worlds without this causal rule and a small set (of 2) possible worlds with it. As with the bishop-on-chessboard in #1, the large set (of 4) possible worlds is the large description (number 1), in which the rules are constraining, while the small set (of 2) possible worlds is the shrink-wrapped description (number 2), in which there is no rule and there are fewer entities to talk about. In the small description, with truth table

| A | B |
| --- | --- |
| false | false |
| true | true |

you could say that A and B are equivalent, just different names for the same quantity, and so there is only one quantity.
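The shrink-wrapping can be sketched in a few lines of Python (a minimal illustration; `causal_rule` is a hypothetical name for the constraint, not anything from the notebook):

```python
from itertools import product

# Large description: all 4 combinations of A and B are possible worlds.
large = set(product([False, True], repeat=2))

def causal_rule(world):
    """The rule "A causes B" (no other causes of B, probabilities only
    0% or 100%): a world survives only if B tracks A."""
    a, b = world
    return a == b

# Small description: the shrink-wrapped set of possible worlds.
small = {w for w in large if causal_rule(w)}

assert len(large) == 4
assert small == {(False, False), (True, True)}
```

In the large description the rule is a constraint on 4 worlds; in the small description there is no rule left to state, only the 2 worlds in which A and B coincide.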


Bottom line: I suppose, then, that causality is a function that maps a large set of possible worlds to a smaller set of possible worlds. If we allow probabilities other than 0% and 100%, it might be a mapping of distributions rather than sets, with the sets being the supports of those distributions. In the domain of that function, the causal rule is expressed in the subjunctive mood, "would be," and in the image of that function, there is no causal rule and everything is declarative.
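A hedged sketch of the distribution picture, assuming the causal rule acts by weighting each world and renormalizing (the `apply_causal_rule` helper and the 90%/10% numbers are illustrative assumptions, not from the talk):

```python
from itertools import product

# Prior over the large set of worlds; uniform purely for illustration.
prior = {w: 0.25 for w in product([False, True], repeat=2)}

def apply_causal_rule(dist, weight):
    """Map a distribution over worlds to a new one: scale each world's
    probability by how strongly the rule allows it, then renormalize."""
    scaled = {w: p * weight(w) for w, p in dist.items()}
    total = sum(scaled.values())
    return {w: p / total for w, p in scaled.items()}

def hard_rule(world):   # the 0%/100% case: B happens iff A happens
    a, b = world
    return 1.0 if a == b else 0.0

def soft_rule(world):   # illustrative: A makes B 90% likely, 10% without A
    a, b = world
    p_b = 0.9 if a else 0.1
    return p_b if b else 1.0 - p_b

posterior = apply_causal_rule(prior, hard_rule)
support = {w for w, p in posterior.items() if p > 0}
assert support == {(False, False), (True, True)}  # support = the small set

soft_posterior = apply_causal_rule(prior, soft_rule)
```

With the hard rule, the support of the output distribution is exactly the small 2-world set; with the soft rule, all 4 worlds stay in the support but the distribution tilts toward the worlds where B tracks A.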

This aligns with my intent in the talk, that causality has everything to do with relating possible worlds—you can't even think about causality if you restrict your attention to only the actual, existing world—and so it's a way of describing truths but isn't an actual truth itself.


Another angle: narratives (stories) about the real world are constrained by the facts of the world—you can't truthfully tell the story about why Hubert Humphrey became president in 1969 because he didn't. You can tell several different stories about why Richard Nixon became president in 1969 and some would align better with the truth than others. (Historians could reject a story as not being in accord with the facts, even if it doesn't explicitly tell or rely on something factually false.)

Causes are very precise narratives; they're narratives at the far end of mathematical precision. Causes can be objectively declared accurate or inaccurate, given the set of possible worlds you're dealing with. (Just as with Russell's paradox about the universal set, one must carefully specify what is meant by "anything.")

But, as narratives, causes are not facts about the world; they organize facts into an explanation.