Added a script for running benchmarks of the models implemented in the Julia library ProbabilisticCircuits.jl (i.e., only RAT-SPNs and HCLTs at the moment).
The implementation of the RAT-SPNs follows the one found in other repositories:
- https://github.com/cambridge-mlg/RAT-SPN
- https://github.com/SPFlow/SPFlow
There is, however, a discrepancy in the number of product units and the number of parameters between those RAT-SPN implementations and the one in ProbabilisticCircuits.jl. This is explained by the fact that (i) the full factorisation of multivariate input distributions is implemented by introducing additional product units, and (ii) the full factorisation is balanced by alternating dummy sum units and product units (perhaps to speed up their CUDA kernels?).
The benchmarks measure the time and peak GPU memory required to perform EVI and MPE (with 50% of the features missing) inference on the training split of CIFAR-10, though this may change in the future. The results are saved as JSON files.
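A minimal sketch of the shape of such a benchmark loop, assuming the JSON.jl package for output; the `evi_inference` and `mpe_inference` names below are hypothetical placeholders, not the actual ProbabilisticCircuits.jl API, and GPU memory tracking (which would require CUDA.jl instrumentation) is omitted:

```julia
using JSON  # JSON.jl (third-party) for saving results, since the script stores JSON

# Hypothetical stand-ins for the library's EVI and MPE inference calls;
# the real ProbabilisticCircuits.jl function names may differ.
evi_inference(circuit, data) = sum(data)                       # placeholder
mpe_inference(circuit, data, mask) = ifelse.(mask, 0.0, data)  # placeholder

function run_benchmark(circuit, data)
    # Mask roughly 50% of the features as missing, as for the MPE benchmark.
    mask = rand(size(data)...) .< 0.5
    results = Dict{String,Any}()
    # Wall-clock time for full-evidence (EVI) inference.
    results["evi_time_s"] = @elapsed evi_inference(circuit, data)
    # Wall-clock time for MPE inference with missing features.
    results["mpe_time_s"] = @elapsed mpe_inference(circuit, data, mask)
    # Peak GPU memory would be recorded here via CUDA.jl (omitted in this sketch).
    return results
end

# Save the measurements to a JSON file, as the script does.
open("results.json", "w") do io
    JSON.print(io, run_benchmark(nothing, rand(100, 32)), 2)
end
```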
Closes #1.