Testing for conditional independence, X ⊥ Y | Z, is a common problem in causal discovery and feature selection. The following two kernel-based methods can perform this test under relatively mild assumptions.
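As a toy illustration of the setup (this example is mine, not from any of the papers below): X and Y are each driven by Z, so they are marginally dependent but conditionally independent given Z.

```python
# Toy data where X and Y are marginally dependent but X ⊥ Y | Z holds.
import numpy as np

rng = np.random.default_rng(0)
n = 500
Z = rng.normal(size=n)
X = Z + 0.5 * rng.normal(size=n)   # X depends only on Z (plus noise)
Y = Z + 0.5 * rng.normal(size=n)   # Y depends only on Z (plus noise)

# Marginally X and Y are correlated through Z; once Z's contribution is
# removed, the leftover noise terms are unrelated.
print(np.corrcoef(X, Y)[0, 1])          # clearly positive
print(np.corrcoef(X - Z, Y - Z)[0, 1])  # roughly zero
```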
A Permutation-Based Kernel Conditional Independence (KCIP) Test [paper][matlab code]
Potentially an improvement on the Kernel Conditional Independence (KCI) test [paper][matlab code], but not as widely used or known, partially due to speed constraints.
Computes kernel matrices for each of the variables X, Y, and Z.
Also provides a two-layer bootstrap permutation test (sketched below) by:
Finding a permutation Y' of Y based on minimizing the permuted Z distances.
Performing a two-sample test (MMD) between the original sample (X, Y, Z) and the permuted sample (X, Y', Z).
Improves upon KCI when its null distribution is not well specified (complex, higher-dimensional Z), or when Z clusters well or is discrete.
Also provides an analytic approximation of the null distribution using a Gamma distribution.
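To make the two layers concrete, here is a minimal sketch in Python. It is not the authors' implementation: the helper names, the block-wise "Z-local" permutation standing in for the paper's optimized permutation, and the pooled-permutation null for the MMD two-sample test are all simplifying assumptions made for illustration.

```python
import numpy as np

def rbf_kernel(A, B, bandwidth=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and the rows of B.
    d2 = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-np.clip(d2, 0, None) / (2.0 * bandwidth ** 2))

def mmd2(S, T, bandwidth=1.0):
    # Biased estimate of the squared MMD between two samples (rows = points).
    return (rbf_kernel(S, S, bandwidth).mean()
            + rbf_kernel(T, T, bandwidth).mean()
            - 2.0 * rbf_kernel(S, T, bandwidth).mean())

def z_local_permutation(Z, rng, block=10):
    # Crude stand-in for the paper's optimization: shuffle indices only within
    # small blocks of Z-neighbours, so permuted Z distances stay small.
    order = np.argsort(Z.ravel())            # assumes scalar Z for simplicity
    perm = np.arange(len(Z))
    for start in range(0, len(Z), block):
        idx = order[start:start + block]
        perm[idx] = rng.permutation(idx)
    return perm

def kcip_sketch(X, Y, Z, n_perm=100, bandwidth=1.0, seed=0):
    rng = np.random.default_rng(seed)
    X, Y, Z = (np.asarray(a).reshape(len(a), -1) for a in (X, Y, Z))
    # Layer 1: build the permuted sample (X, Y', Z), with Y' a Z-local permutation of Y.
    Yp = Y[z_local_permutation(Z, rng)]
    S = np.hstack([X, Y, Z])                  # original sample
    T = np.hstack([X, Yp, Z])                 # permuted sample
    stat = mmd2(S, T, bandwidth)
    # Layer 2: permutation null for the MMD two-sample test (shuffle pooled rows).
    pooled, n = np.vstack([S, T]), len(S)
    null = []
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        null.append(mmd2(pooled[idx[:n]], pooled[idx[n:]], bandwidth))
    pval = (1 + np.sum(np.array(null) >= stat)) / (1 + n_perm)
    return stat, pval
```

On the toy data above, kcip_sketch(X, Y, Z) should tend to return a large p-value, while regenerating Y to depend directly on X (e.g., Y = X + noise) should tend to push it toward zero; the quadratic-time kernels mean a few hundred points is a comfortable size for this sketch.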
A nonparametric test based on regression error (FIT) [paper] [python code]
A bit more fringe than KCI/KCIP, but the paper provides good simulation comparisons among all three methods and more.
Uses a nonparametric regression (in their case, a decision tree) to examine the change in predictive power from including versus excluding one of the tested variables, given the conditioning set Z.
Uses the mean squared error as a test statistic and an analytic Gaussian/t-test approach to compute a p-value (sketched below).
Seemingly efficient for large sample sizes compared to the kernel-based approaches.
Interesting connections, since trees/forests are adaptive kernel methods, with possible extensions to forests, honesty, and leaf permutations.
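A rough sketch of the regression-error idea, assuming the test of X ⊥ Y | Z is run by regressing Y on Z alone versus on (X, Z); the train/test split, tree depth, and one-sided paired t-test below are illustrative choices rather than the authors' exact settings.

```python
import numpy as np
from scipy.stats import ttest_rel
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

def fit_sketch(X, Y, Z, seed=0):
    # Reshape everything to 2-d design matrices; Y is treated as a scalar response.
    X, Y, Z = (np.asarray(a).reshape(len(a), -1) for a in (X, Y, Z))
    XZ = np.hstack([X, Z])
    XZ_tr, XZ_te, Z_tr, Z_te, y_tr, y_te = train_test_split(
        XZ, Z, Y.ravel(), test_size=0.5, random_state=seed)
    # Held-out squared errors when predicting Y from Z alone vs. from (X, Z).
    err_z = (DecisionTreeRegressor(max_depth=6, random_state=seed)
             .fit(Z_tr, y_tr).predict(Z_te) - y_te) ** 2
    err_xz = (DecisionTreeRegressor(max_depth=6, random_state=seed)
              .fit(XZ_tr, y_tr).predict(XZ_te) - y_te) ** 2
    # One-sided paired t-test: does adding X significantly reduce the error?
    t_stat, p_two_sided = ttest_rel(err_z, err_xz)
    p_value = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2
    return t_stat, p_value
```

When X ⊥ Y | Z holds, the two held-out error vectors should look exchangeable and the p-value should be large; when X carries information about Y beyond Z, the (X, Z) model's errors drop and the p-value shrinks.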