Something with the Gamma convergence is still wacky – I suspect `tf.igamma` might have numerical issues with its gradient. I ended up just using Powell for it, which doesn't require any gradient information. Full circle back to the pre-TensorFlow era.
Coverage decreased (-0.2%) to 91.534% when pulling c38fa120a662950e276ff141510d62c723ee289c on scipy-optimizer into ef01b4be0b419500299b4e3dffd4d1c9bed8c515 on master.
Pretty sweet: https://www.tensorflow.org/api_docs/python/tf/contrib/opt/ScipyOptimizerInterface
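To illustrate why Powell sidesteps the gradient issue: it's a direct-search method that only ever evaluates the objective, so any numerical problems in the gradient of the incomplete-gamma functions never come into play. A minimal sketch with plain SciPy (not the actual model from this PR; the toy data and `neg_log_lik` are illustrative), fitting a Gamma shape parameter by maximum likelihood:

```python
import numpy as np
from scipy import optimize, special

rng = np.random.default_rng(0)
data = rng.gamma(shape=3.0, scale=1.0, size=500)  # toy data with known shape

def neg_log_lik(params):
    # Negative log-likelihood of Gamma(shape=a, scale=1); gammaln is the
    # SciPy relative of the tf.igamma/lgamma family mentioned above.
    a = params[0]
    if a <= 0:
        return np.inf  # keep the direct search inside the valid domain
    return -np.sum((a - 1.0) * np.log(data) - data - special.gammaln(a))

# Powell uses only function values, so no gradient is ever requested.
res = optimize.minimize(neg_log_lik, x0=[1.0], method="Powell")
a_hat = res.x.item()
print(a_hat)  # should land near the true shape of 3.0
```

`ScipyOptimizerInterface` exposes the same `method=` knob, so passing `method='Powell'` there routes the TensorFlow loss through this gradient-free path.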