Use the RTBM (this branch in particular: https://github.com/RiemannAI/theta/pull/56) as the driver for the integration.

For some of the examples it does achieve better performance than Vegas in terms of error per given number of events; however, the training is considerably slower. That said, each iteration taking up to an hour of training is not really a problem if the integrand itself takes several hours to compute, since the reduction in the number of events needed will make up for it.
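For concreteness, this is the kind of estimator the RTBM ends up driving: draw events from the trained density p(x) and take the sample mean of f(x)/p(x). Below is a minimal numpy sketch, with a fixed multivariate normal standing in for the trained RTBM density (the actual theta API is not assumed here) and a simple 5-dimensional Gaussian as the test integrand:

```python
import numpy as np
from scipy.stats import multivariate_normal

DIM = 5

def integrand(x):
    """Toy test integrand: a narrow (unnormalized) Gaussian peak in DIM dimensions."""
    return np.exp(-np.sum((x - 0.5) ** 2, axis=-1) / 0.01)

# Stand-in for the trained RTBM density: a diagonal normal centred on the peak.
density = multivariate_normal(mean=np.full(DIM, 0.5), cov=0.02 * np.eye(DIM))

def integrate(n_events, seed=0):
    rng = np.random.default_rng(seed)
    x = density.rvs(size=n_events, random_state=rng)
    # Importance-sampling weights: integrand over sampling density.
    weights = integrand(x) / density.pdf(x)
    return weights.mean(), weights.std() / np.sqrt(n_events)

result, error = integrate(10_000)
print(f"I = {result:.3e} +/- {error:.1e}")
```

The better the trained density matches the integrand, the smaller the spread of the weights, which is where the error-per-events gain over Vegas comes from.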
Since the RTBM code is not running on TensorFlow (and also can't be run on a GPU), this can only be run on CPU and with `run_eager` on.
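In practice that means something like the following before building the integrator (plain TensorFlow calls shown here; the `run_eager` switch mentioned above is essentially the last call):

```python
import os

# Hide any GPUs before TensorFlow initializes: the RTBM code is CPU-only.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

import tensorflow as tf

# Run tf.function-decorated code eagerly, so the numpy-based RTBM calls
# can be mixed freely with the TensorFlow side of the integration.
tf.config.run_functions_eagerly(True)
```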
This branch is very much a draft since there are still some outstanding points:
- The optimization algorithm is the naivest of genetic algorithms at the moment (a rough sketch of what that means follows this list). To be honest, it was just something to play around with while I was getting the rest of the code to work; the fact that it nonetheless achieves better performance than Vegas makes me very optimistic about the possibilities of this integration method.
- It needs to be checked with more complex integrands (at the moment I've only tried very simple integrands, up to 5 dimensions).
- It is very slow; that needs to be improved at the very least.
- It is using a fork of the RTBM, which is tricky at best.
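For reference, by "naivest of genetic algorithms" I mean roughly the sketch below: keep the fittest parameter vectors of each generation and refill the population with mutated copies of them. Everything here (parameter count, population sizes, the toy fitness) is a placeholder; the real fitness would score how well the RTBM density matches the integrand, not this quadratic toy.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(params):
    """Placeholder objective; the real one would rate the RTBM fit.
    This toy is maximal at params == 1."""
    return -np.sum((params - 1.0) ** 2)

def naive_ga(n_params, pop_size=32, n_generations=50,
             n_survivors=8, mutation_scale=0.1):
    # Random initial population of parameter vectors.
    population = rng.normal(size=(pop_size, n_params))
    for _ in range(n_generations):
        # Rank by fitness and keep only the best individuals.
        scores = np.array([fitness(p) for p in population])
        survivors = population[np.argsort(scores)[-n_survivors:]]
        # Refill the population with mutated copies of random survivors.
        children = [
            survivors[rng.integers(n_survivors)]
            + mutation_scale * rng.normal(size=n_params)
            for _ in range(pop_size - n_survivors)
        ]
        population = np.vstack([survivors, np.array(children)])
    scores = np.array([fitness(p) for p in population])
    return population[np.argmax(scores)]

print(naive_ga(n_params=6))  # should approach a vector of ones
```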