JuliaDynamics / Associations.jl

Algorithms for quantifying associations, independence testing and causal inference from data.
https://juliadynamics.github.io/Associations.jl/stable/

PAI Calculation Issue #177

Closed. norbertgerena closed this issue 1 year ago.

norbertgerena commented 1 year ago

Good day. Thank you so much for this library.

I just want to clarify the PAI calculation used here. In the original paper, they predict X using Mₓy; however, in your code, you predict Y using Mₓy.

Original code inside the `pai` function:

```julia
Mₓy = crossmapembed(x, y, d, τ, PAIEmbedding())

# Compute `ỹ | Mₓy` and return `correspondence_measure(ỹ | Mₓy, y)`.
crossmap_basic(Mₓy, y, d, τ; correspondence_measure = correspondence_measure, r = r)
```

It should be:

```julia
Mₓy = crossmapembed(x, y, d, τ, PAIEmbedding())

# Compute `x̃ | Mₓy` and return `correspondence_measure(x̃ | Mₓy, x)`.
crossmap_basic(Mₓy, x, d, τ; correspondence_measure = correspondence_measure, r = r)
```

Also, if you look closely at `crossmap_basic` under utils, the corresponding `y[t]` used is off by `(d-1)τ`. It should be `ỹ[i] = sum(w[k]*y[j+(d-1)τ] for (k, j) in enumerate(J))`, since the indices in `J` start from 1 even though they really refer to timesteps `(d-1)τ` through `L`.
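To make the alignment concrete, here is a minimal, language-agnostic sketch (Python, with made-up helper names, not this package's code) of why embedding point `i` corresponds to time index `i + (d-1)*τ`:

```python
# Hypothetical sketch of delay-embedding index alignment (not the package's
# actual code). With embedding dimension d and lag tau, embedding point i is
# built from x[i], x[i + tau], ..., x[i + (d - 1)*tau], so its most recent
# sample sits at time index i + (d - 1)*tau. A prediction for point i must
# therefore be compared against y[i + (d - 1)*tau], not y[i].
def delay_embed(x, d, tau):
    n = len(x) - (d - 1) * tau  # number of valid embedding points
    return [([x[i + k * tau] for k in range(d)], i + (d - 1) * tau)
            for i in range(n)]

# x[t] = t, so the time indices are easy to verify by eye
pts = delay_embed(list(range(10)), 3, 2)
vec0, t0 = pts[0]  # vec0 = [0, 2, 4], t0 = 4
```

Conventions differ between libraries (some embed backwards in time), which is exactly the kind of inconsistency that makes these off-by-`(d-1)τ` bugs easy to introduce.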

kahaaga commented 1 year ago

Hi there, @norbertgerena! Thanks for letting us know, and for the detailed feedback. I will have a look at this immediately.

norbertgerena commented 1 year ago

Thank you @kahaaga. I was trying to replicate the results of the original PAI paper, specifically figures 6b and 6d, but I could not replicate them, even when using the authors' own code.

Hopefully, they respond to my pull request.

[image]

kahaaga commented 1 year ago

As part of addressing this issue, I'll try to reproduce the same figure myself. That would be nice to have in the documentation anyways.

kahaaga commented 1 year ago

@norbertgerena It's been some time since I looked at the source code for these methods, so I just implemented both the cross mapping (CCM) and pairwise asymmetric inference (PAI) methods from scratch again, instead of trying to debug.

Reproducing original figures

In the attached screenshot, I use the new implementation to reproduce figure 3 in Sugihara et al. (2012) (first row) and McCracken & Weigel (2014) (second row). I use the same data for both methods. Here, I use embedding dimension d = 2 for CCM, and embedding dimension d = 2 + 1 for PAI (the +1 is due to the extra variable included in the embedding).
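The difference between the two embedding types can be sketched roughly like this (a hedged Python sketch with hypothetical function names, not the package's API): a CCM embedding uses d lags of the source only, while a PAI embedding appends the present value of the second variable, hence d becomes d + 1.

```python
# Hypothetical sketch, assuming the convention that embedding point i spans
# samples i .. i + (d - 1)*tau.
def ccm_embedding(x, d, tau):
    # source-only delay embedding: d lagged copies of x
    n = len(x) - (d - 1) * tau
    return [[x[i + k * tau] for k in range(d)] for i in range(n)]

def pai_embedding(x, y, d, tau):
    # mixed embedding: the same d lags of x, plus y at the "present" index
    return [row + [y[i + (d - 1) * tau]]
            for i, row in enumerate(ccm_embedding(x, d, tau))]

x = [0.0, 0.1, 0.2, 0.3]
y = [10, 11, 12, 13]
m_x = ccm_embedding(x, 2, 1)      # 2-dimensional points (CCM)
m_xy = pai_embedding(x, y, 2, 1)  # (2 + 1)-dimensional points (PAI)
```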

[Screenshot 2022-12-01 at 02 59 10]

Note: in the upper row, I used source-only embeddings; in the lower row, I used mixed embeddings. I just forgot to change the notation.

As expected, for PAI the correlation is much stronger when the target variable is also included in the embedding. Not only do the predictions match the observations much more closely (the last two columns of the figure), but convergence is also much faster (the first column of the figure).

A note on CCM/PAI

It's been quite a few years since I did a deep literature dive into the CCM & friends world, so I don't know what current state of the art is. However, I'd like to note a few informal things regarding the relationship between PAI and CCM.

Let's reiterate what PAI does. Denote by ŷ | X the predictions we make by finding neighbors in the embedding of X alone, and by ŷ | XY the predictions we make by finding neighbors in the mixed embedding of X and Y. As I see it, the take-away from the McCracken & Weigel paper is that

ŷ | XY is more strongly correlated with the observations y than ŷ | X is.

But of course predictions of Y get better when I include Y in the set of variables I use to make predictions. That's like asking "does my diet contain more carrots when I explicitly bring a carrot to every meal than when I just eat whatever is served?" (poor analogy, sorry :D). The answer is always yes.

I think a useful way to think about the PAI statistic is in terms of degradation of prediction performance. If I used ŷ | Y, my predictions would probably be near-perfect every time (controlled by the number of neighbors, noise, etc., of course). If I used ŷ | X (as in CCM), the quality of the predictions would, among many other factors, be controlled by how strongly X and Y are coupled. Clearly, predictions are worse for ŷ | X than for ŷ | Y, unless X and Y are completely synchronized.

The moment I start including additional variables from X by doing ŷ | YX, performance starts to degrade.

For some reason, the directional statistic Δ_pai = ρ(ŷ | XY) − ρ(x̂ | YX) performs better than Δ_ccm = ρ(ŷ | X) − ρ(x̂ | Y) as an indicator of directional influence, at least for the particular systems investigated in McCracken & Weigel (2014).
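In code, these directional statistics reduce to a difference of correspondence measures (Pearson correlation here) between predictions and observations in the two directions. A hedged Python sketch, assuming the definitions above:

```python
# Illustrative sketch only: given cross-map predictions and the corresponding
# observations, the directional statistic is the difference between the two
# prediction-observation correlations.
def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var_a = sum((ai - ma) ** 2 for ai in a)
    var_b = sum((bi - mb) ** 2 for bi in b)
    return cov / (var_a * var_b) ** 0.5

def delta(y_obs, y_pred, x_obs, x_pred):
    # e.g. delta_pai with y_pred = (ŷ | XY) and x_pred = (x̂ | YX);
    # the interpretation of the sign depends on the chosen convention
    return pearson(y_pred, y_obs) - pearson(x_pred, x_obs)
```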

This is of course not rigorous in any way, but it helped me think about the algorithm when re-implementing it now.

Summary

If you're happy with how the reproduced figures look, @norbertgerena, I'll prepare a PR with the updated code and some examples for the documentation.

norbertgerena commented 1 year ago

Hi @kahaaga. Thank you. I'm happy with this.

I was able to reproduce fig 3 but I'm using Python (ccm_causlity package). To be transparent, I am trying to create a simple PAI library as well (not as elegant and comprehensive as yours :D)

[image]

Due to the inconsistencies in the generation of embeddings across different libraries, including the one from the PAI authors (https://github.com/jmmccracken/PAI/pull/2#issue-1467717305), I was planning to make the embedding more general. I have not finalized it yet, but here's a snapshot:

[image]

Datseris commented 1 year ago

why not join efforts and develop the best library of all here ;)

norbertgerena commented 1 year ago

Hi @Datseris. I would love to! But first, I need to study Julia :D

Datseris commented 1 year ago

here is a very good tutorial to get started! I am totally unbiased in my opinion of it!

https://www.youtube.com/watch?v=Fi7Pf2NveH0

kahaaga commented 1 year ago

@norbertgerena Given that you've been able to debug the Julia code here, and that you're already programming in Python, I don't think the transition would be difficult at all!

How about this:

When I've got my new implementation of the CCM/PAI code above ready, you can have some time to inspect/understand it. Then, we do the following:

norbertgerena commented 1 year ago

@Datseris, subscribed! 😄

@kahaaga, I would love to contribute. Sounds like a plan.

norbertgerena commented 1 year ago

@kahaaga

Your library has been mentioned by the PAI authors themselves :)

[image]

kahaaga commented 1 year ago

Your library has been mentioned by the PAI authors themselves :)

Ah, nice! Looks like I need to get the new implementation out, so we can start fine-tuning it with the generic embeddings :D I'll try to have a basic, corrected version ready by tomorrow evening (CET time), as a last effort before Christmas holidays. If not done by then, it'll have to wait until mid next week.

Then I'll just make a new minor release which fixes the bug you found @norbertgerena , and we go from there with the generalized version, which I imagine we release as part of CausalityTools v2.

kahaaga commented 1 year ago

Maybe this should be in a separate issue, but just as a self-reminder for when implementing the generic embeddings:

PAI, by including the target variable in the source embedding, creates much stronger correlations (artificially, one may argue). I haven't tested it in detail, but perhaps there is a potentially new method here:

Weighting the different variables in the embedding according to some user-specified weights. That is: instead of letting the weights for the linear combinations be strictly a function of the distance to the nearest neighbors, we could use a double-weighting that also considers the "relative importance" of each variable, as specified by the user. This is a straightforward extension of CCM/PAI that I think may be useful in practice for users who have a-priori information on the trustworthiness of each measured variable.
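A minimal sketch of what that double-weighting could look like (entirely hypothetical; the names and the simple rescaling are illustrative, not an existing API): rescale each embedding coordinate by a user-supplied importance before the nearest-neighbor search, so that less-trusted variables contribute less to the distances.

```python
# Hypothetical sketch of per-variable importance weighting. Rescaling the
# coordinates up front is one simple way to realize a weighted distance in
# the subsequent nearest-neighbor search.
def weight_embedding(points, importances):
    return [[w * v for w, v in zip(importances, p)] for p in points]

pts = [[1.0, 10.0], [2.0, 20.0]]
# halve the influence of the second (less trusted) variable
scaled = weight_embedding(pts, [1.0, 0.5])  # -> [[1.0, 5.0], [2.0, 10.0]]
```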

kahaaga commented 1 year ago

Hey, @norbertgerena! I think I'm just about done with the re-write of the cross mapping API (see #179).

In that process, I re-did the existing examples and added some more examples from Sugihara et al. and McCracken & Weigel. The updated docs are here.

As you pointed out, the embedding I used in the previous version was not the same as the one they used in the paper. I have now corrected this, so the method does the same as in the paper (although in principle it is completely arbitrary which mixed embedding to use).

For the CCM stuff, I'm able to reproduce the original papers fairly well. But for the PAI stuff, I can't reproduce their figure 8 (upper right panel). As I understand it, they define Δ' = ρ(ŷ | YX) − ρ(x̂ | XY).

McCracken & Weigel's paper is a bit vague about this, but in the preprint version of their paper they state:

"If Δ' < 0, then single time step of Y added to the delay vectors constructed from X create stronger estimators of X than the single time step of X added to the delay vectors constructed from Y do for Y"

I'm getting similar results, but with the opposite sign. That is, reproducing their Fig 8, I'm consistently getting Δ' > 0, which would imply that Y drives X, not the other way around.

The fact that the sign is consistent tells me that either 1) I've made a mistake or misunderstood their explanations, or 2) they've flipped the sign in the paper.

Have you tried to reproduce that figure using your Python implementation? It would be nice to see if we're getting the same results before I publish the new version.

kahaaga commented 1 year ago

For the CCM stuff, I'm able to reproduce the original papers fairly well. But for the PAI stuff, I can't reproduce their figure 8 (upper right panel).

Nevermind, I had just made a mistake in the example calculations. I'll merge the bug-fix as soon as CI tests+docs pass.

norbertgerena commented 1 year ago

Happy new year! Sorry, I'm still on vacation. Will look into this ASAP.

Regards,
Norbert

> On Dec 27, 2022, at 11:19 PM, Kristian Agasøster Haaga wrote: Closed #177 as completed via #179.


kahaaga commented 1 year ago

Hey @norbertgerena! No worries. Just leave a comment on #184 when you're back from vacation and available. Happy new year!

norbertgerena commented 1 year ago

Hi @kahaaga. Here are my results when reproducing figures 3 and 8 using our Python package.

Here's for CCM (fig 3):

[images]

And here's for PAI (fig 8, using $Y_t \mid (Y_t, Y_{t-1}, X_t)$):

[images]

J. McCracken updated his PAI library to address the issue I raised. I'll check it out first then work on #184.

kahaaga commented 1 year ago

@norbertgerena Excellent! Thank you so much for making these reproductions.

I will try to reproduce all the same plots. Here's my reproduction of the upper-right panel of Figure 8 in McCracken & Weigel (you can find this example in the CausalityTools docs):

[Screenshot 2023-01-05 at 19 12 41]

Before I can work on the remaining examples, I have some urgent work that needs to be done first relating to the release of v2 of this package, so it will probably not happen until the weekend or next week. In the meantime, feel free to think about how to elegantly deal with #184, and if you know what the new update in the original PAI repo entails, that would be excellent.