One idea raised for speeding MBAR up without losing precision is to start the solve with minibatching, using Adam or another stochastic gradient method of the kind common in ML. On its own that won't be as accurate, but we can switch to Newton-Raphson once the estimate gets close enough.
Looking at various implementations, pulling in something like scikit-learn would introduce too many dependencies and require squeezing the data into a shape it wasn't designed for, so reimplementing the optimizer ourselves may be the best bet.
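A minimal sketch of what that reimplementation could look like, assuming the standard convex MBAR objective whose stationary point gives the free energies: `u_kn` is a (K, N) array of reduced potentials u_k(x_n) and `N_k` the per-state sample counts. The function names, batch size, learning rate, and tolerances here are all placeholders, not pymbar API; it is just one way to wire Adam-on-minibatches into a Newton-Raphson polish.

```python
import numpy as np
from scipy.special import logsumexp

def grad_and_weights(f, u_kn, N_k, idx=None):
    """Gradient of the MBAR objective over the samples in idx (all if None)."""
    u = u_kn if idx is None else u_kn[:, idx]
    N = u_kn.shape[1]
    # per-sample log weights ln N_k + f_k - u_k(x_n), softmaxed over states k
    log_w = np.log(N_k)[:, None] + f[:, None] - u
    p = np.exp(log_w - logsumexp(log_w, axis=0))          # (K, n_batch)
    grad = p.sum(axis=1) / u.shape[1] - N_k / N           # batch estimate of full gradient
    return grad, p

def solve_mbar_hybrid(u_kn, N_k, batch=256, lr=1e-2, adam_steps=2000,
                      switch_tol=1e-2, newton_tol=1e-10, seed=0):
    rng = np.random.default_rng(seed)
    K, N = u_kn.shape
    f = np.zeros(K)
    m = np.zeros(K); v = np.zeros(K)                      # Adam moment estimates
    b1, b2, eps = 0.9, 0.999, 1e-8
    # Phase 1: cheap, noisy minibatch Adam steps to get near the solution.
    for t in range(1, adam_steps + 1):
        idx = rng.choice(N, size=min(batch, N), replace=False)
        g, _ = grad_and_weights(f, u_kn, N_k, idx)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g * g
        f -= lr * (m / (1 - b1**t)) / (np.sqrt(v / (1 - b2**t)) + eps)
        f -= f[0]                                         # fix the gauge f_0 = 0
        if t % 100 == 0:                                  # occasionally check full gradient
            g_full, _ = grad_and_weights(f, u_kn, N_k)
            if np.linalg.norm(g_full) < switch_tol:       # close enough: hand off
                break
    # Phase 2: Newton-Raphson on the full data set; f_0 stays pinned to
    # remove the constant-shift degeneracy that makes the Hessian singular.
    for _ in range(100):
        g, p = grad_and_weights(f, u_kn, N_k)
        if np.linalg.norm(g) < newton_tol:
            break
        H = (np.diag(p.sum(axis=1)) - p @ p.T) / N        # Hessian of the objective
        f[1:] -= np.linalg.solve(H[1:, 1:], g[1:])
    return f - f[0]
```

The handoff criterion here (full-gradient norm below a threshold, checked every 100 steps) is just one plausible choice; the point is that Adam only ever touches a minibatch per step, while the final Newton iterations use all samples so the converged answer is identical to solving the self-consistent equations directly.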