Closed BradyPlanden closed 2 months ago
This issue seems to have been resolved by PR #213.
In light of API changes, the example is now:
import pybop
import numpy as np
parameter_set = pybop.ParameterSet.pybamm("Chen2020")
model = pybop.lithium_ion.SPMe(parameter_set=parameter_set)
# Fitting parameters
parameters = [
    pybop.Parameter(
        "Positive electrode diffusivity [m2.s-1]",
        prior=pybop.Gaussian(3.43e-15, 1e-15),
        bounds=[1e-15, 5e-15],
    ),
]
sigma = 0.001
t_eval = np.arange(0, 900, 2)
values = model.predict(t_eval=t_eval)
corrupt_voltage = values["Terminal voltage [V]"].data + np.random.normal(
    0, sigma, len(t_eval)
)
dataset = pybop.Dataset(
    {
        "Time [s]": t_eval,
        "Current function [A]": values["Current [A]"].data,
        "Voltage [V]": corrupt_voltage,
    }
)
# Generate problem, cost function, and optimisation class
problem = pybop.FittingProblem(model, parameters, dataset)
cost = pybop.SumSquaredError(problem)
optim = pybop.Optimisation(cost, optimiser=pybop.GradientDescent)
# optim.optimiser.set_learning_rate(0.025) # replaced by sigma
x, final_cost = optim.run()
print("Estimated parameters:", x)
Feature description
When optimising parameters of varying scale, non-bounded methods can overshoot acceptable parameter ranges during fitting. Transforming / normalising the parameter space on input to the optimiser should solve this issue. An example of this issue:
Our implementation of Gradient Descent (#88) is unbounded and immediately selects a candidate solution orders of magnitude larger than the acceptable range.
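A minimal sketch of the idea, independent of pybop's API: optimise a parameter of scale ~1e-15 (such as the diffusivity above) by running gradient descent in a log10-transformed space, where the search variable is order 1 and a single learning rate behaves sensibly. All names here (true_value, cost, grad) are illustrative assumptions, not pybop functions.

```python
import numpy as np

# Hypothetical target: a diffusivity-scale parameter [m2.s-1]
true_value = 3.43e-15

def cost(x):
    # Toy cost, minimised at x == true_value; defined in log space so
    # the scale mismatch between x (~1e-15) and the step size is visible
    return (np.log10(x) - np.log10(true_value)) ** 2

def grad(f, z, h=1e-6):
    # Central finite difference in the transformed search variable
    return (f(z + h) - f(z - h)) / (2 * h)

# Gradient descent in log10 space: z = log10(x) is order 1, so a
# learning rate of ~0.1 neither overshoots nor stalls.
z = np.log10(1e-15)  # initial guess, transformed
lr = 0.1
for _ in range(200):
    z -= lr * grad(lambda zz: cost(10 ** zz), z)

estimate = 10 ** z  # map back to the physical parameter space
```

Running the same loop directly on x with any fixed learning rate either diverges or barely moves, which is the overshoot behaviour described above; the transformation makes the step size meaningful across parameter scales.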
Motivation
No response
Possible implementation
No response
Additional context
No response