
Solve systems of nonlinear equations #109

Open alashworth opened 5 years ago

alashworth commented 5 years ago

Issue by bgoodri Tuesday Aug 23, 2016 at 03:11 GMT Originally opened as https://github.com/stan-dev/stan/issues/2023


Summary:

Support solving a system of non-linear equations using one of the better-supported "unsupported" modules in Eigen (NonLinearOptimization). An alternative would be to use KINSOL from SUNDIALS, but I think Eigen would be easier.

Description:

To start with, let's assume the user has written a user-defined function that takes a vector of length n and outputs a vector of length n containing the values of the equations that are supposed to be zero at the solution. Also, let's assume the user has written a user-defined function that takes a vector of length n and outputs a square matrix of order n representing the Jacobian, i.e. the partial derivative of the i-th equation with respect to the j-th input goes in the (i, j) cell of this matrix.

(This is easier if these two functions are defined in a local functions block such that data and transformed data are in scope, but I need to create another issue for that.)

The call in the transformed parameters (or model) block of the Stan program would look like

transformed parameters {
  vector[n] solution;
  {
    vector[n] starting_values;
    // fill starting_values with something intelligent
    solution = dogleg(starting_values, equations_functor, jacobian_functor);
  }
}

where the signature of the dogleg function (or we could call it powell or something else) is

vector dogleg(vector, functor, functor);

As an overview, the dogleg C++ function would do these steps:

  1. Convert starting_values to an Eigen::VectorXd of doubles, e.g. Eigen::VectorXd theta = value_of(starting_values)
  2. Instantiate an Eigen::HybridNonLinearSolver with a suitable hybrj_functor
  3. Call the hybrj1 method of the Eigen::HybridNonLinearSolver with theta as the initial point
  4. Return a var vector after using the implicit function theorem to figure out the derivatives

In detail, for step 2, see the example at https://bitbucket.org/eigen/eigen/src/5a47e5a5b02e4d6ae1da98c2348f9c1cb01bdaf9/unsupported/test/NonLinearOptimization.cpp?at=default&fileviewer=file-view-default#NonLinearOptimization.cpp-245 The necessary hybrj_functor has an int operator()(const VectorXd &x, VectorXd &fvec) and an int df(const VectorXd &x, MatrixXd &fjac), each of which returns 0 on success and uses its second argument to store the function values and the Jacobian, respectively. So we need to create a hybrj_functor whose operator() calls the functor that is the second argument to dogleg and assigns the resulting vector to fvec, and whose df() method calls the functor that is the third argument to dogleg and assigns the resulting matrix to fjac.
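For illustration, here is a minimal sketch of such an adapter (the names are mine, and it assumes the user-supplied functors return an Eigen vector and matrix respectively; the exact requirements of Eigen's unsupported solver should be checked against the test linked above):

#include <Eigen/Dense>
#include <unsupported/Eigen/NonLinearOptimization>

// Adapter exposing the interface HybridNonLinearSolver expects:
// operator() stores the equation values, df() stores the Jacobian,
// and both return 0 to signal success.
template <typename F1, typename F2>
struct hybrj_adapter {
  const F1& equations_;  // second argument to dogleg: VectorXd -> VectorXd
  const F2& jacobian_;   // third argument to dogleg:  VectorXd -> MatrixXd

  hybrj_adapter(const F1& equations, const F2& jacobian)
    : equations_(equations), jacobian_(jacobian) {}

  int operator()(const Eigen::VectorXd& x, Eigen::VectorXd& fvec) {
    fvec = equations_(x);  // equation values at x
    return 0;
  }

  int df(const Eigen::VectorXd& x, Eigen::MatrixXd& fjac) {
    fjac = jacobian_(x);   // Jacobian at x
    return 0;
  }
};

An lvalue adapter would then be handed to the solver, roughly as Eigen::HybridNonLinearSolver<hybrj_adapter<F1, F2> > solver(adapter); solver.hybrj1(theta);, which runs Powell's hybrid method as in the Eigen test.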

For step 4, it is just like https://en.wikipedia.org/wiki/Implicit_function_theorem#Application:_change_of_coordinates We need to evaluate the Jacobian of the equations with respect to the unknowns at the solution and multiply its negative inverse by the Jacobian with respect to the parameters to get the partial derivatives of the solution.
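Spelled out in my own notation (not from the original issue): if f(theta, p) = 0 implicitly defines the solution theta*(p), the implicit function theorem gives

\frac{\partial \theta^*}{\partial p} = -\left[\frac{\partial f}{\partial \theta}\bigg|_{(\theta^*, p)}\right]^{-1} \frac{\partial f}{\partial p}\bigg|_{(\theta^*, p)}

so the derivatives of the solution come from linear algebra at the root, without differentiating through the solver's iterations.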

Reproducible Steps:

Does not currently exist

Current Output:

Does not currently exist

Expected Output:

A var vector such that if you pass it to equations_functor it returns a numerically zero vector, and if you pass it to jacobian_functor it returns a non-singular matrix.

Additional Information:

We need this for the models the Federal Reserve wants us to estimate and lots of economics models generally.

Current Version:

v2.11.0

alashworth commented 5 years ago

Comment by bob-carpenter Tuesday Aug 23, 2016 at 10:43 GMT


I'm happy to code up the language parts, but you don't want me anywhere near any Jacobian or linear algebra code that matters.

We want to coordinate with @charlesm93 on this because there are PK/PD applications we want to be sure to handle. At least I think this is the same feature (I'm really not that good with all this linear algebra and solving stuff).

alashworth commented 5 years ago

Comment by billgillespie Tuesday Aug 23, 2016 at 13:40 GMT


This sounds like a good starting point for the kind of root finding functionality we would want for pharmacometrics (PMX) applications. Some initial thoughts:

alashworth commented 5 years ago

Comment by bgoodri Tuesday Aug 23, 2016 at 19:43 GMT


I think we are saying the same things here. Having the Jacobian be automatic would be nice, but I think it would require fwd mode, which isn't fully in place yet. I haven't used KINSOL either but my sense is that Powell's method, which Eigen implements, is the consensus choice for this sort of thing.

alashworth commented 5 years ago

Comment by bob-carpenter Wednesday Aug 24, 2016 at 11:20 GMT


Why would we need forward mode for an embedded Jacobian? In the ODE solver, we just use nested reverse mode.

Forward mode's ready to go, though it could use a lot of optimization. When it's written more efficiently, it should be much faster to calculate N x N Jacobians with N forward-mode calls rather than N backward-mode ones.

alashworth commented 5 years ago

Comment by bgoodri Thursday Aug 25, 2016 at 15:48 GMT


Eigen has a signature that omits the functor for the Jacobian and calculates the Jacobian by numerical differences. That might be less accurate than autodiff, but I think it is about as many flops as N reverse mode calls. In any event, we should start with the case where there is an analytical Jacobian and once that is done, the rest will be easy.

alashworth commented 5 years ago

Comment by bob-carpenter Thursday Aug 25, 2016 at 15:56 GMT


I'd think that would depend on how they do finite diffs to calculate a Jacobian. It could be O(N^2) rather than O(N) if they do each derivative separately.

alashworth commented 5 years ago

Comment by bgoodri Sunday Aug 28, 2016 at 18:04 GMT


@bob-carpenter Is this https://github.com/stan-dev/math/compare/feature/dogleg about what it needs to be on the Math side in order for you to generate the code for it on the Stan side?

alashworth commented 5 years ago

Comment by bob-carpenter Tuesday Oct 04, 2016 at 17:59 GMT


Yes, that would enable a functor dogleg() to be written as a special expression like integrate_ode() if that's what you're asking.

It'll be a while before I can get to higher-order functions within Stan!


alashworth commented 5 years ago

Comment by charlesm93 Monday Nov 07, 2016 at 22:10 GMT


I wanted to ask what the status of the dogleg function is. From what I can tell, the math code has been written but not tested (I didn't find unit tests), and we need to expose it to Stan's grammar, in a manner similar to what was done for the ODE integrators. I'm happy to get my hands dirty with both tasks.

We can start with a function that requires an analytical jacobian, though in the long run we'll want an automatic approximation of the Jacobian for the function to have broader applications.

alashworth commented 5 years ago

Comment by bgoodri Monday Nov 07, 2016 at 22:25 GMT


I think that is about the state of it. Doing an autodiffed Jacobian is easy to implement in reverse mode because the .jacobian() method is already implemented.
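For reference, a sketch of how a reverse-mode autodiffed Jacobian could be obtained with the existing functional (the toy system and the umbrella include are my own; a stan::math::jacobian(f, x, fx, J) functional along these lines is presumably what is referred to above):

#include <stan/math.hpp>
#include <Eigen/Dense>

// Hypothetical system functor: templated on the scalar type so it can be
// instantiated with stan::math::var for reverse-mode autodiff.
struct toy_system {
  template <typename T>
  Eigen::Matrix<T, Eigen::Dynamic, 1>
  operator()(const Eigen::Matrix<T, Eigen::Dynamic, 1>& x) const {
    Eigen::Matrix<T, Eigen::Dynamic, 1> f(2);
    f(0) = x(0) * x(1) - 6;
    f(1) = x(0) - x(1) - 1;
    return f;
  }
};

int main() {
  Eigen::VectorXd x(2);
  x << 3, 2;
  Eigen::VectorXd fx;
  Eigen::MatrixXd J;
  stan::math::jacobian(toy_system(), x, fx, J);  // fills fx = f(x) and J = df/dx
  return 0;
}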


alashworth commented 5 years ago

Comment by charlesm93 Monday Nov 07, 2016 at 22:31 GMT


Actually let's take a step back. We need a way to pass in parameters and data. We could use the following signature:

vector equation(vector, real[], real[], int[])

vector dogleg(vector, functor, functor, real[], real[], int[]);

where the additional arguments contain parameters, real data, and integer data. These arguments would also be passed to the Jacobian functor, which should depend on the same variables as the equation function.

alashworth commented 5 years ago

Comment by bgoodri Monday Nov 07, 2016 at 22:43 GMT


I personally would rather we implement the "local functions" block between the transformed data and parameters blocks so that the functions would be defined as part of the class and anything from the data and transformed data blocks would be in scope. Going the route of how the signatures for the integrate_ode* functions are defined is really cumbersome if the equations involve matrices.


alashworth commented 5 years ago

Comment by syclik Monday Nov 07, 2016 at 22:46 GMT


+1 regarding additional functors.

What do you mean by "defined as part of the class"? I take it you mean the generated C++? I think we should think design first, then constrain design with implementation details later.


alashworth commented 5 years ago

Comment by bgoodri Monday Nov 07, 2016 at 22:53 GMT


I thought we had a consensus on the design a few months ago:

  • Introduce a "local functions" block between transformed data and parameters
  • Such functions would be member functions of the model class (like log_prob and whatnot) rather than floating around in the namespace. Thus, local functions can access objects declared in data and transformed data without having to pass them as arguments. The user would define a local function that inputs a vector of parameters and outputs a vector that is numerically zero at the solution.
  • Then functors like dogleg() would only need a minimal number of arguments, as in

vector dogleg_with_jacobian(function, function, vector)
vector dogleg(function, vector)

alashworth commented 5 years ago

Comment by syclik Monday Nov 07, 2016 at 23:19 GMT


Where is the spec? I don't remember seeing an actual design. (I could have missed it and the decision.)


alashworth commented 5 years ago

Comment by bgoodri Monday Nov 07, 2016 at 23:24 GMT


https://github.com/stan-dev/stan/wiki/Functionals-spec


alashworth commented 5 years ago

Comment by bob-carpenter Tuesday Nov 08, 2016 at 03:26 GMT


On Nov 7, 2016, at 5:53 PM, bgoodri notifications@github.com wrote:

I thought we had a consensus on the design a few months ago:

Yes, we do. I could work on this next. I don't think it'd be that hard.

I'm realizing that a lot of the type inference we've been doing in the main body of Stan programs falls down for functions because of their templating. Can't tell when things are double or not, for instance, at compile time. It'd require reasoning about the instantiations of the functions. That was the bug in the conditional operator, by the way.

I've been thinking about adding the list/tuple type and it's going to mess with a lot of the basic code which assumes a type is (int|real|vector|row_vector|matrix) with a number of array dimensions. The constraints don't play into the type system. But adding a tuple is different, because we need to define the types of the elements.

I can start thinking about general functional programming. It'd be awesome to add that. And with functors in C++, I think it might be possible.

alashworth commented 5 years ago

Comment by charlesm93 Tuesday Nov 08, 2016 at 13:52 GMT


I didn't realize local functions were an option we were contemplating. I agree functionals would be awesome and, among other things, would help a lot for the generalized event handler.

implement the "local functions" block between the transformed data and parameters blocks

You mean between the parameters and transformed parameters block, right? That way parameters can be used in the function.

alashworth commented 5 years ago

Comment by charlesm93 Monday Dec 05, 2016 at 19:00 GMT


Two points:

  • Are we sticking with Powell's method? Michael suggested we would be better off with a fully gradient-based method since we'll be computing derivatives anyways. Powell's method is hybrid. We could go for Newton's method but it might not be stable enough for a lot of problems.
  • What is the time frame for implementing local functions? I'd rather do it "right" in the first go, but I'm shooting for a working prototype of the solver in Torsten before the end of January (deliverable for a grant), and I might create a quick and dirty working version.

alashworth commented 5 years ago

Comment by bob-carpenter Monday Dec 05, 2016 at 20:13 GMT


No way we're going to get local functions by end of January. Too much going on between now and then and they're going to involve a ton of testing. But probably not long after that if it's the next big thing I do after the AST and generator refactor.


alashworth commented 5 years ago

Comment by charlesm93 Thursday Feb 02, 2017 at 18:53 GMT


As discussed in the meeting, I'm going ahead and developing a first version of the solver. I'll keep it modular, and I'll test with the dogleg method Ben began working on. Should I create a new branch or continue working on feature/dogleg?

alashworth commented 5 years ago

Comment by bgoodri Thursday Feb 02, 2017 at 19:17 GMT


Same branch


alashworth commented 5 years ago

Comment by bob-carpenter Thursday Feb 02, 2017 at 19:24 GMT


Whatever's easiest in terms of branching.

From the meeting, we decided to go with an integrate_ode-like interface for now, then later simplify by removing some arguments. Ideally, there will be a clean call to the actual solver so that we can plug and play with different solvers, but the main goal is to get one solver working, building, and tested.


alashworth commented 5 years ago

Comment by charlesm93 Thursday Feb 02, 2017 at 21:10 GMT


I'm trying to figure out how to call dogleg. I seem to be doing something wrong with the functors (I wrote the functors as I would have written them for the ODE solver):

#include <stan/math/prim/mat/fun/dogleg.hpp>
#include <gtest/gtest.h>

// Toy system whose root is (36, 6).
inline Eigen::VectorXd
algebraEq(const Eigen::VectorXd x) {
  Eigen::VectorXd y(2);
  y(0) = x(0) - 36;
  y(1) = x(1) - 6;
  return y;
}

struct algebraEq_functor {
  inline Eigen::VectorXd
  operator()(const Eigen::VectorXd x) const {
    return algebraEq(x);
  }
};

// Constant Jacobian of the toy system (the 2 x 2 identity).
inline Eigen::MatrixXd
jacobian(const Eigen::VectorXd x) {
  Eigen::MatrixXd y(2, 2);
  y(0, 0) = 1;
  y(0, 1) = 0;
  y(1, 0) = 0;
  y(1, 1) = 1;
  return y;
}

struct jacobian_functor {
  inline Eigen::MatrixXd
  operator()(const Eigen::VectorXd x) const {
    return jacobian(x);
  }
};

TEST(MathMatrix, dogleg) {
  Eigen::VectorXd x(2);
  x << 32, 5;

  Eigen::VectorXd theta;
  theta = stan::math::dogleg(x, algebraEq_functor(), jacobian_functor());
}

The compiler produces the following error message:

 error: no matching conversion for
      functional-style cast from 'const Eigen::VectorXd' (aka 'const
      Matrix<double, Dynamic, 1>') to 'algebraEq_functor'

and

error: no matching conversion for
      functional-style cast from 'const Eigen::VectorXd' (aka 'const
      Matrix<double, Dynamic, 1>') to 'jacobian_functor'
          fjac = F2(x);

I'm guessing I'm passing the wrong arguments to dogleg.

alashworth commented 5 years ago

Comment by bob-carpenter Friday Feb 03, 2017 at 07:09 GMT


It really helps to have line numbers for this, but I'm going to take a wild guess that when you call this

theta = stan::math::dogleg(x, algebraEq_functor(), jacobian_functor());

and it says

The compiler produces the following error message:

error: no matching conversion for functional-style cast from 'const Eigen::VectorXd' (aka 'const Matrix<double, Dynamic, 1>') to 'algebraEq_functor'

that you have the arguments in the wrong order. I don't know what branch you're on. But check out the signature for dogleg and match it.

alashworth commented 5 years ago

Comment by charlesm93 Friday Feb 03, 2017 at 13:50 GMT


I just uploaded the code on feature/dogleg. The test is dogleg_test.cpp. I'm pretty sure I have the order right (changing the order did not fix the bug). I'll keep digging, but in the meantime, here's the full error message:

In file included from test/unit/math/prim/mat/fun/dogleg_test.cpp:2:
./stan/math/prim/mat/fun/dogleg.hpp:29:18: error: no matching conversion for
      functional-style cast from 'const Eigen::VectorXd' (aka 'const
      Matrix<double, Dynamic, 1>') to 'algebraEq_functor'
          fvec = F1(x);
                 ^~~~
./stan/math/prim/mat/fun/dogleg.hpp:26:14: note: in instantiation of member
      function 'stan::math::dogleg(const Eigen::VectorXd &, const
      algebraEq_functor, const jacobian_functor)::hybrj_functor::operator()'
      requested here
      struct hybrj_functor : NLOFunctor {
             ^
test/unit/math/prim/mat/fun/dogleg_test.cpp:43:23: note: in instantiation of
      function template specialization 'stan::math::dogleg<algebraEq_functor,
      jacobian_functor>' requested here
  theta = stan::math::dogleg(x, algebraEq_functor(), jacobian_functor());
                      ^
test/unit/math/prim/mat/fun/dogleg_test.cpp:14:8: note: candidate constructor
      (the implicit copy constructor) not viable: no known conversion from
      'const Eigen::VectorXd' (aka 'const Matrix<double, Dynamic, 1>') to
      'const algebraEq_functor' for 1st argument
struct algebraEq_functor {
       ^
test/unit/math/prim/mat/fun/dogleg_test.cpp:14:8: note: candidate constructor
      (the implicit default constructor) not viable: requires 0 arguments, but 1
      was provided
In file included from test/unit/math/prim/mat/fun/dogleg_test.cpp:2:
./stan/math/prim/mat/fun/dogleg.hpp:33:18: error: no matching conversion for
      functional-style cast from 'const Eigen::VectorXd' (aka 'const
      Matrix<double, Dynamic, 1>') to 'jacobian_functor'
          fjac = F2(x);
                 ^~~~
./stan/math/prim/mat/fun/dogleg.hpp:26:14: note: in instantiation of member
      function 'stan::math::dogleg(const Eigen::VectorXd &, const
      algebraEq_functor, const jacobian_functor)::hybrj_functor::df' requested
      here
      struct hybrj_functor : NLOFunctor {
             ^
test/unit/math/prim/mat/fun/dogleg_test.cpp:43:23: note: in instantiation of
      function template specialization 'stan::math::dogleg<algebraEq_functor,
      jacobian_functor>' requested here
  theta = stan::math::dogleg(x, algebraEq_functor(), jacobian_functor());
                      ^
test/unit/math/prim/mat/fun/dogleg_test.cpp:31:8: note: candidate constructor
      (the implicit copy constructor) not viable: no known conversion from
      'const Eigen::VectorXd' (aka 'const Matrix<double, Dynamic, 1>') to
      'const jacobian_functor' for 1st argument
struct jacobian_functor {
       ^
test/unit/math/prim/mat/fun/dogleg_test.cpp:31:8: note: candidate constructor
      (the implicit default constructor) not viable: requires 0 arguments, but 1
      was provided
2 errors generated.
make: *** [test/unit/math/prim/mat/fun/dogleg_test.o] Error 1
make test/unit/math/prim/mat/fun/dogleg_test failed
exit now (02/03/17 08:46:46 EST)
alashworth commented 5 years ago

Comment by sakrejda Friday Feb 03, 2017 at 14:02 GMT


Are you sure it's on the branch? I'm not finding the test file in stan-dev/math


alashworth commented 5 years ago

Comment by charlesm93 Friday Feb 03, 2017 at 14:05 GMT


Yes, positive. https://github.com/stan-dev/math/blob/feature/dogleg/test/unit/math/prim/mat/fun/dogleg_test.cpp

alashworth commented 5 years ago

Comment by sakrejda Friday Feb 03, 2017 at 14:24 GMT


Line 29 in https://github.com/stan-dev/math/blob/feature/dogleg/stan/math/prim/mat/fun/dogleg.hpp

F1 is being treated as the type name, not as the argument name (that would be the functor), so it tries to do a cast rather than producing an Eigen vector like you want. I'm not sure why the arguments are not named, since that would solve the problem, but maybe there's more magic going on than I realize here. In any case, what's happening on line 29 is understandable and not what you want. K
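To illustrate the diagnosis (a hypothetical snippet, not the actual dogleg.hpp code):

#include <Eigen/Dense>

// F1 below is a template *type* parameter, so F1(x) is a functional-style
// cast to F1, which is exactly the error the compiler reports above.
template <typename F1>
Eigen::VectorXd broken(const Eigen::VectorXd& x) {
  return F1(x);  // a cast, not a call
}

// Naming an argument of type F1 and calling that instance avoids the cast.
template <typename F1>
Eigen::VectorXd fixed(const F1& f1, const Eigen::VectorXd& x) {
  return f1(x);  // invokes f1's operator()
}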


alashworth commented 5 years ago

Comment by bgoodri Friday Feb 03, 2017 at 14:43 GMT


It was intended to work as in Eigen's tests https://bitbucket.org/eigen/eigen/src/5430cad5531274d51024d3950d2fa511aeec5055/unsupported/test/NonLinearOptimization.cpp?at=default&fileviewer=file-view-default but apparently something went wrong.