Closed yasu-sh closed 1 year ago
BTW I'm pretty sure this will be much faster using py-tetrad in R...
@jdramsey Yes! This issue was already solved by my pull request last year.
The choice of R implementation matters, since performance varies a lot. R uses lazy evaluation, so we need to check where the hot spots are. https://github.com/bd2kccd/r-causal/blob/fc370245e938d5a2cb6e6abd4548bf7107fdd1dc/R/tetrad_utils.R#L336
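A minimal sketch of how one might locate such hot spots with a profiler. This uses Python's standard `cProfile` rather than the R code linked above, and `load_dataset` is a hypothetical stand-in for a slow row-by-row data-reading step, not part of r-causal or py-tetrad:

```python
import cProfile
import io
import pstats


def load_dataset(n_rows=20000, n_cols=37):
    # Hypothetical stand-in: simulate a row-by-row read, the kind of
    # loop that often dominates data-loading time.
    data = []
    for _ in range(n_rows):
        data.append([0] * n_cols)
    return data


profiler = cProfile.Profile()
profiler.enable()
load_dataset()
profiler.disable()

# Print the top 5 entries by cumulative time; the hot spot
# (here, load_dataset) should appear near the top.
buf = io.StringIO()
stats = pstats.Stats(profiler, stream=buf)
stats.sort_stats("cumulative").print_stats(5)
print(buf.getvalue())
```

In R, the analogous tools would be `Rprof()` plus `summaryRprof()`, which give a similar by-function time breakdown.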
Ah, thanks!
I am using the alarm dataset from bnlearn and synthetic discrete data with 20,000 samples. Reading the data takes about 60 seconds, while causal discovery takes only about 2-3 seconds.
I would like to improve this time-consuming step, and I have some ideas that should give a good result.