Closed Jekannadar closed 3 months ago
Thanks for this issue and the useful code snippet attached! Let me follow up internally to see who might best be able to assist you.
If I simply do not apply standardization or normalization, things seem to work, in that there are no OptimizationWarnings, although I then still get warned about not having standardized my data.
Are you getting warnings about unstandardized data with the code above, where you are passing the Standardize transform? That would not be expected behavior.
std = tensor([0.], dtype=torch.float64)
It seems to me that your data may be constant. Is this intended? What are the raw values you're passing in?
Thank you for your responses!
Are you getting warnings about unstandardized data with the code above, where you are passing the Standardize transform? That would not be expected behavior.
This is the case, yes, which is why that struck me as a bit odd.
std = tensor([0.], dtype=torch.float64)
It seems to me that your data may be constant. Is this intended? What are the raw values you're passing in?
Thank you very much; this has fixed the issue with the standardization. For testing purposes, one of the objectives currently always returns zero, and that was indeed the problem. In hindsight, it is hard to standardize all zeros to anything meaningful; a bit embarrassing!
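For reference, this failure mode is easy to reproduce without any Ax machinery: an outcome column that is identically zero has zero standard deviation, so there is nothing for the standardizing transform to rescale by. A minimal sketch in plain Python (the values are made up to mirror the test setup described above):

```python
# A hypothetical objective that always returns zero, as in the
# test setup described above.
ys = [0.0] * 10

# Standardization maps y -> (y - mean) / std; with a constant column
# the sample standard deviation is 0 and the transform is undefined.
mean = sum(ys) / len(ys)
std = (sum((y - mean) ** 2 for y in ys) / (len(ys) - 1)) ** 0.5
print(mean, std)  # 0.0 0.0 -- matching the std = tensor([0.]) in the warning
```

Dividing by that zero std is exactly why the transform cannot produce standardized outputs and the warning persists.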
I do still get the following warnings, which seem to cost a lot of runtime. I do not get them when commenting out the input and output transforms, and I could not find anything definitive about this status online.
[OptimizationWarning('Optimization failed within `scipy.optimize.minimize` with status 2 and message ABNORMAL_TERMINATION_IN_LNSRCH.')] Trying again with a new set of initial conditions.
I would really appreciate any ideas regarding this!
Optimization warnings like the one you're getting can be hard to debug, unfortunately, but they likely have something to do with the data you're putting in. The warning says that the line search used to pick a step size during L-BFGS optimization failed to find any step that would improve the objective value. This can happen with a poorly conditioned Hessian matrix, which may in turn stem from your input data. Normalizing the inputs usually helps with this, but I suppose that wouldn't be true in every case.
The optimizer then tries again from a new starting point, so unless you're also seeing a warning that optimization failed entirely, the optimizer should still be working, just sometimes taking twice as long as it needs to.
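As a rough illustration of why input scaling matters here (a sketch of the idea, not BoTorch's actual internals): min-max scaling each input dimension to the unit cube, which is what the `Normalize` input transform does, keeps the columns on comparable scales and tends to improve the conditioning of the downstream optimization. The values below are made up for illustration:

```python
import numpy as np

# Hypothetical raw inputs on wildly different scales -- the kind of
# data that can lead to a poorly conditioned problem.
X = np.array([
    [1e-3, 5.0e4],
    [2e-3, 9.0e4],
    [4e-3, 7.0e4],
])

# Min-max scale each column to [0, 1], analogous to BoTorch's
# Normalize input transform with bounds taken from the data.
lo, hi = X.min(axis=0), X.max(axis=0)
X_scaled = (X - lo) / (hi - lo)
print(X_scaled.min(axis=0), X_scaled.max(axis=0))  # [0. 0.] [1. 1.]
```

After scaling, both columns span [0, 1] instead of differing by seven orders of magnitude.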
Thank you for the clarification, I shall muck about with the input data then and see if anything improves stability!
Question
I am attempting to use Ax to optimize a set of hyperparameters for a computationally expensive function (a potentially long-running Gurobi model).
The actual setup is working; however, during execution I get the following warnings:
InputDataWarning: Data is not standardized (std = tensor([0.], dtype=torch.float64), mean = tensor([0.], dtype=torch.float64)). Please consider scaling the input to zero mean and unit variance.
and
RuntimeWarning: Optimization failed in gen_candidates_scipy with the following warning(s): [OptimizationWarning('Optimization failed within scipy.optimize.minimize with status 2 and message ABNORMAL_TERMINATION_IN_LNSRCH.')
If I simply do not apply standardization or normalization, things seem to work, in that there are no OptimizationWarnings, although I then still get warned about not having standardized my data.
Am I setting up the experiment wrongly? Should I not be using the Service API for this?