Closed by gdalle 1 year ago
Thanks for the great questions! Totally agree they're important to dig into. We're already over the word limit set by JOSS (1363 of the 1000 permitted words), so we had to cut some stuff. We're all ears if you've got ideas on what to trim to make room for these points.
About exact vs. variational inference: Basically, exact methods give precise answers but can be computationally intractable, while approximate methods, like those used in RxInfer.jl, scale better but are less accurate. It's a trade-off between speed (or scalability) and accuracy. One notable drawback of approximate methods is that they lack formal accuracy guarantees; in fact, providing such guarantees is itself an NP-hard problem.
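To make that trade-off concrete, here's a small illustrative sketch in Python (a hypothetical toy model, not using TensorInference.jl or RxInfer.jl; the mean-field updates are the standard textbook fixed-point equations): exact inference sums over every joint state, which is exponential in the number of variables in general, while mean-field variational inference runs cheap local updates but returns a biased marginal.

```python
import itertools
import math

# Toy 3-variable chain MRF over x_i in {-1, +1}:
#   p(x) ∝ exp(h*x1 + J*(x1*x2 + x2*x3))
# (hypothetical example model, not any package's API)
h, J = 1.0, 0.5

def weight(x):
    return math.exp(h * x[0] + J * (x[0] * x[1] + x[1] * x[2]))

# Exact inference: enumerate all 2^3 joint states (exponential in general).
states = list(itertools.product([-1, 1], repeat=3))
Z = sum(weight(x) for x in states)
exact = sum(weight(x) for x in states if x[1] == +1) / Z  # P(x2 = +1)

# Mean-field variational inference: q(x) = q1(x1)*q2(x2)*q3(x3),
# coordinate ascent on the ELBO gives the fixed point
#   m_i = tanh(local field computed from neighbor means).
m = [0.0, 0.0, 0.0]
for _ in range(200):
    m[0] = math.tanh(h + J * m[1])
    m[1] = math.tanh(J * (m[0] + m[2]))
    m[2] = math.tanh(J * m[1])
approx = (1 + m[1]) / 2  # q(x2 = +1)

# Cheap per-iteration cost, but biased: approx ≈ 0.75 vs exact ≈ 0.68 here.
```

On this tiny chain the gap is modest, but the point is structural: the exact pass scales with the full state space (or, for tensor-network methods, exponentially with treewidth), while the mean-field pass only ever touches one variable at a time and comes with no guarantee on how far `approx` is from `exact`.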
Regarding Julia packages, I know RxInfer.jl best. It's big on approximate methods (mainly variational) for continuous variables (it does seem to support categorical distributions, which are discrete, but this is definitely not the package's strength). This is why we didn't pick it as a reference to compare against. The others, like GraphicalModelLearning.jl and BayesNets.jl, are light on documentation, which would make them hard to evaluate fairly with our benchmarks.
Every few years, there's a UAI inference competition. We checked out the contenders there and picked Merlin and libDAI for our benchmarks (both implemented in C++). They're open-source, strong in exact inference, and well-documented. Our initial benchmarks are promising and I'm rerunning them now. Happy to share these once they're done and could even pop them into the JOSS paper if you think that'd be helpful.
Beyond the usual reasons to go for Julia—like its speed, the power of multiple dispatch, and its intuitive mathematical syntax—one major factor stands out: the tensor network field has seen a lot of action recently, and much of that innovation is happening right in the Julia ecosystem.
Hope this sheds some light! We'd love to hear what you think could be trimmed from the paper to make room for some of the insights we've discussed here.
We included a discussion of the trade-off between exact and approximate inference in the paper and explained why we opted for an exact approach. The paper now mentions renowned packages for both approximate and exact inference. We also added a reference to the performance evaluation in our package's documentation, which compares TensorInference.jl with its predecessor JunctionTrees.jl, as well as with Merlin and libDAI.
much better!
The only explicit mention of alternatives provided in the JOSS paper is the JunctionTrees.jl package, by the very same author. This is not enough to get an accurate picture of the field.
Some questions I have are:
A quick search yielded the following related Julia packages, but there are probably more:
Plus there are the heavyweight probabilistic programming languages like Turing.jl and Gen.jl.
https://github.com/openjournals/joss-reviews/issues/5700