Tim212019 opened 5 years ago
Hi,
Pesto is first of all a toolbox for optimization and uncertainty analysis. It does not handle ODEs by itself, hence it does not enforce parameter positivity. So technically, you should not need to adapt anything in your code, at least I don't see where and how... The examples only show logarithmic parametrization, but that's not a must, so you can simply parametrize your system in a different way... Can you maybe explain your code in a bit more detail?
Best, Paul
There's no mistake in the first place. It's just that Pesto does not exponentiate your parameters. So define your bounds to be e.g. lb = [-5, -5] and ub = [3, 3]. The question is only what happens in your objective function. But that file is what you write yourself, hence just don't exponentiate the parameters in the objective function. Which tool do you use to integrate the ODE?
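In Pesto's parameters struct, that would be something like this (a minimal sketch, with the bounds directly on the linear parameter scale):
parameters.name   = {'theta_1', 'theta_2'};
parameters.min    = [-5, -5];
parameters.max    = [ 3,  3];
parameters.number = length(parameters.name);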
I am sorry, but I really do not understand what you mean, especially when you write about the "correct function"... You will have to post an example of your objective function here...
Pesto and ode45 do not enforce anything on the parameters by themselves. All that Pesto needs is an objective function that takes a parameter vector and returns a cost function value (and a gradient); it does not alter its structure or assume anything about it. Hence, without seeing your implementation, I can't judge where the problem is...
Here's an example of your main function, or how it could look:
% Preliminary
clear all;
close all;
clc;
% True parameters
theta_true = [-1;-3];
% parameters
parameters.name = {'theta_1','theta_2'};
parameters.min = [-6,-6];
parameters.max = [ 2, 2];
parameters.number = length(parameters.name);
% Log-likelihood function
objectiveFunction = @(theta) objectiveExample(theta);
% Options
optionsPesto = PestoOptions();
optionsPesto.obj_type = 'negative log-posterior';
optionsPesto.n_starts = 20;
optionsPesto.objOutNumber = 1;
optionsPesto.mode = 'visual';
optionsPesto.plot_options.add_points.par = theta_true;
optionsPesto.plot_options.add_points.logPost = objectiveFunction(theta_true);
% Optimization
parameters = getMultiStarts(parameters, objectiveFunction, optionsPesto);
% compute profiles for uncertainty quantification
parameters = getParameterProfiles(parameters, objectiveFunction, optionsPesto);
Here's an example of your objective function, or how it could look:
function varargout = objectiveExample(theta)
% here's some data
data = [0.5; -1.8; -3.3; -10.2];
% Here a little "if" for not integrating unnecessary sensitivities, if
% only the objective itself is required
if nargout < 2
% Right-hand side and ODE options
f = @(t, x, theta) theta(1) * x(1) + theta(2) * exp(t);
x0 = 0;
odeOptions = odeset('RelTol', 1e-5, 'AbsTol', 1e-8);
% output for t
tout = [0, 0.5, 1, 2];
% integrate the ODE
[~, xout] = ode45(@(t,x) f(t, x, theta), tout, x0, odeOptions);
% sum up the objective function
obj = 0.5 *sum((xout - data).^2);
varargout{1} = obj;
elseif nargout == 2
% Here comes your code with sensitivities
else
error('Call to objective with too many outputs!')
end
end
This code infers negative parameters for an objective function based on your ODE and some random data. Does that help you?
Well... the reason why I do not necessarily want to discuss in German is that GitHub issues also serve as documentation of the toolbox. Whenever another user has a problem with the software, he/she/* can just browse through the issues and read up. And we're in science... There's not really a way around English. So, sorry, but I will stick to English.
The problem you have is simply in the line
[~,X] = ode15s(@(t,x) f(t, x, exp(theta)), t, x0(exp(theta)));
Here, you exponentiate theta. If you want it to be possibly negative, just don't exponentiate it. This isn't a problem of Pesto, it's a problem in your code. (And that is your responsibility, not mine.) I gave you a rather detailed, but minimalistic example using your ODE and negative parameters.
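A minimal sketch of what the corrected call could look like, assuming your initial condition x0 should also receive theta on the linear scale:
[~, X] = ode15s(@(t, x) f(t, x, theta), t, x0(theta)); % pass theta directly, no exp()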
I have the impression that you're pretty new to this. That's totally okay, and it's good that you start out with an easy example. Good choice. Try to first really fully understand every line of the code in your (or my shorter) example and what it does. Do some rubber duck debugging. Dig through it. That's possible, you can do that. But the concrete problem you have is that your code doesn't do what you want it to do. And I think that's because you don't fully understand it. In that case, I feel like I don't actually help you if I clean up everything in there; you have to go through this yourself to learn it.
Little example:
In optimization, your local optimizer will every now and then need the gradient, but within one optimization step it performs a line search, for which it doesn't need a gradient. So the objective function will sometimes be called without a gradient being requested. In that case, you don't need to compute sensitivities, because you don't need a gradient. So you don't want to integrate them, as this is more time-consuming, because the full ODE system is bigger. So make an "if" statement on how the cost function was called: with one, two, or three outputs, as in the sketch below.
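A minimal sketch of such a branching objective, reusing the toy ODE and data from my example above (the sensitivity equations are written out by hand here, and a third output, the Hessian, could be handled analogously):
function varargout = objectiveWithGradient(theta)
% same toy data and output times as in the earlier example
data = [0.5; -1.8; -3.3; -10.2];
tout = [0, 0.5, 1, 2];
odeOptions = odeset('RelTol', 1e-5, 'AbsTol', 1e-8);
if nargout < 2
    % objective only: integrate the plain ODE
    f = @(t, x) theta(1) * x(1) + theta(2) * exp(t);
    [~, xout] = ode45(f, tout, 0, odeOptions);
    varargout{1} = 0.5 * sum((xout(:, 1) - data).^2);
elseif nargout == 2
    % objective and gradient: augment the state with the forward
    % sensitivities, y = [x; dx/dtheta_1; dx/dtheta_2]
    faug = @(t, y) [theta(1) * y(1) + theta(2) * exp(t); ...
                    theta(1) * y(2) + y(1); ...
                    theta(1) * y(3) + exp(t)];
    [~, yout] = ode45(faug, tout, [0; 0; 0], odeOptions);
    res = yout(:, 1) - data;
    varargout{1} = 0.5 * sum(res.^2);
    varargout{2} = [sum(res .* yout(:, 2)); sum(res .* yout(:, 3))];
else
    error('Call to objective with too many outputs!')
end
end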
Alternatively: Pesto can compute gradients based on finite differences, if you set options.objOutNumber = 1/2/3, depending on how many derivatives your objective function can generate. For prototyping, that can be useful... (I did that in my example code.)
Maybe look into more examples than the conversion reaction and try to really get every line in the code. That will help you. Good luck and carry on! :)
Hello,
I have a non-linear ODE system and wanted to define the parameter min and max. Because the parameters are negative, I have a problem with the bounds, because it can only estimate positive parameters.
How can I fix this without changing the (only positive) forward sensitivities?