stack-of-tasks / pinocchio

A fast and flexible implementation of Rigid Body Dynamics algorithms and their analytical derivatives
http://stack-of-tasks.github.io/pinocchio/
BSD 2-Clause "Simplified" License

Safety checks in Python #909

Closed · gabrielebndn closed this issue 4 years ago

gabrielebndn commented 4 years ago

Currently, safety checks, such as verifying the length of an input array or that the Data object is consistent with the Model, are only done through asserts, i.e. only in debug mode.

This leaves users completely on their own in Release mode, but it is consistent with what Eigen does, and I think it should stay this way: if users want the checks, they can still compile in Debug mode.

However, users are left completely powerless in Python if they are using the released versions and are not compiling the bindings themselves. One solution would be to always enable the checks in Python, but I believe the best way is to add a global switch allowing the checks to be enabled or disabled at will, something like pinocchio.setAssertionsEnabled(bool). This might also be useful in the future if we want to ship a shared library.

If we decide to go this way, then all safety assert statements should be replaced by a macro, defined as something like this:

#ifdef PINOCCHIO_RUNTIME_ASSERT
#  define PINOCCHIO_ASSERT(cond,msg)                \
  do {                                              \
    if(pinocchio::isAssertionsEnabled && !(cond))   \
      throw std::runtime_error(msg);                \
  } while(0)
#else
#  define PINOCCHIO_ASSERT(cond,msg) assert((cond) && msg)
#endif

with pinocchio::isAssertionsEnabled initially set to false, plus some interface to properly set it.

The main motivation for this is Python, but it would be available in C++ too. Python would be compiled by default with PINOCCHIO_RUNTIME_ASSERT defined.

What do you think?

gabrielebndn commented 4 years ago

Oh, I see Python actually does perform the checks! My bad, I did not remember that, I should have verified before. Still, what do you think of setting a switch?

gabrielebndn commented 4 years ago

Also, the checks are performed, but since they are made through asserts, a failure actually crashes the whole program and quits the Python interpreter. I think this is not desirable behavior in Python. Exceptions would be better, in my opinion.

gabrielebndn commented 4 years ago

Oh, I see Python actually does perform the checks! My bad, I did not remember that, I should have verified before. Still, what do you think of setting a switch?

Actually, this is what happens: with the apt-installed version (v2.1.8), it seems the asserts are enabled, but with the manually compiled version (v2.1.9, but I do not think it changed) they are not (NDEBUG is defined). Am I getting it right? What are the Python compilation options for the apt version? @nim65s, do you know anything about it? If I am getting it right, I find it confusing: the manually-compiled Release version should be identical to the one actually released via apt, I think.

gabrielebndn commented 4 years ago

This test

import pinocchio as pin

model = pin.buildSampleModelManipulator()
data = model.createData()
q = pin.utils.zero(model.nq+1)

pin.forwardKinematics(model,data,q)

Does not give me any error when I use the manually-compiled version. This means the assertions are disabled and NDEBUG is defined. I can see from the compiler output that -DNDEBUG is indeed part of the compilation command. But, when I run it with the apt-installed version, it fails with the following error

python: /opt/openrobots/include/pinocchio/algorithm/kinematics.hxx:108: void pinocchio::forwardKinematics(const pinocchio::ModelTpl<Scalar, Options, JointCollectionTpl>&, pinocchio::DataTpl<Scalar, Options, JointCollectionTpl>&, const Eigen::MatrixBase<Mat>&) [with Scalar = double; int Options = 0; JointCollectionTpl = pinocchio::JointCollectionDefaultTpl; ConfigVectorType = Eigen::Matrix<double, -1, 1>]: Assertion `q.size() == model.nq && "The configuration vector is not of right size"' failed.
Aborted (core dumped)

then the Python interpreter crashes with no possibility of recovery. This means the assertions are enabled and NDEBUG is not defined. I see two problems here: 1) The apt-released version should not be different from the default manually-compiled Release version, I think. 2) The fact that the whole interpreter crashes is bad, in my opinion. This should be handled through exceptions, at least in Python

Also, I think it is nice to allow users to choose whether they want the checks or not, even in Python

jcarpent commented 4 years ago

I think the best approach is to expose the checkers in Python and let users call them if they need to. This is a safe strategy that would not interfere with the current scheme.

gabrielebndn commented 4 years ago

I think the best approach is to expose the checkers in Python and let users call them if they need to. This is a safe strategy that would not interfere with the current scheme.

What do you mean by "expose the checker"? model.check(data) is already exposed, if that's what you mean. This calls checkData(model,data), which is not exposed, but the instance method is just the same. Also, this is not the only thing one might want to check: the most common errors concern the size of the input vectors. I think it can get quite cumbersome if users are forced to set up the checks manually on their own, and it is not user-friendly to tell them "you are on your own, do your homework", especially in Python

cmastalli commented 4 years ago

I would like to draw your attention to this nice assert library: https://github.com/gpakosz/PPK_ASSERT, where you are able to ignore asserts at runtime.

cmastalli commented 4 years ago

@gabrielebndn we could also add Python wrappers around the Python bindings, where:

  1. we could add Python assertions
  2. improve the automatic description of boost python which shows C++ objects (I personally hate it :)

This is conceptually an easy job, but it requires a lot of very boring work to wrap each function.

gabrielebndn commented 4 years ago

I would like to draw your attention to this nice assert library: https://github.com/gpakosz/PPK_ASSERT, where you are able to ignore asserts at runtime.

I am looking at it. It seems nice

gabrielebndn commented 4 years ago

@gabrielebndn we could also add Python wrappers around the Python bindings, where:

  1. we could add Python assertions
  2. improve the automatic description of boost python which shows C++ objects (I personally hate it :)

This is conceptually an easy job, but it requires a lot of very boring work to wrap each function.

Not only would this be long and cumbersome, it would also be harder to maintain. I would personally prefer to replace all assert with a configurable PINOCCHIO_ASSERT macro, which can be done very quickly and is cleaner. Plus, I like the automatic boost-python description :)

jcarpent commented 4 years ago

One solution would be to redefine PINOCCHIO_ASSERT in the top header file of the Python bindings. For that, we just need to do something like:

#ifndef PINOCCHIO_ASSERT
  #define PINOCCHIO_ASSERT(cond,msg) assert((cond) && msg)
#endif

This will allow more generic features than the one you suggested before. @gabrielebndn What do you think?

wxmerkt commented 4 years ago

The fact that the whole interpreter crashes is bad, in my opinion. This should be handled through exceptions, at least in Python

I agree, and it's particularly annoying in Jupyter notebooks. assert crashes Python (and I expect asserts, by default, not to be active when compiling in Release mode). If you want to be sure it's checked, perhaps an if statement (inexpensive) that throws an exception (expensive, but otherwise you crash and can't catch anything), which is passed through the bindings to Python, may be a better option?

jcarpent commented 4 years ago

I agree and it's particularly annoying in Jupyter notebooks. assert crashes Python (and I expect, by default, them not to be active when compiling in Release mode)

Indeed, assert should not appear in Release mode.

cmastalli commented 4 years ago

With the header library that I shared, you can avoid the crash of an assert at runtime. This is because sometimes, when debugging, it is convenient to ignore an assert

wxmerkt commented 4 years ago

Would if (q.size() != nq_) throw std::runtime_error("Wrong size"); be inconvenient to replace assert? Is the cost of the if too much to add?

jcarpent commented 4 years ago

It is not a good idea to throw inside efficient algorithms; it would break predictability. I would suggest doing this with a checker function defined for each algorithm.

cmastalli commented 4 years ago

It is also still possible to include the throw only when we compile the Python bindings. However, it looks like a bit of a dirty workaround.

I'm still not sure this topic is important. My opinion is: if users want to debug their code, they need to know that Pinocchio has to be compiled in Debug mode (like everyone else on the planet).

I very much like plain approaches where users are exposed to every aspect of the development. So, a solution could be just to add a tutorial that explains how to troubleshoot your algorithm.

jmirabel commented 4 years ago

One solution would be to redefine PINOCCHIO_ASSERT in the top header file of the Python bindings.

Wouldn't you break ABI compatibility in a case like:

jcarpent commented 4 years ago

I don't think so. Why do you think so? Any hints?

jmirabel commented 4 years ago

You have no way of knowing which pinocchio::algo implementation will be found (the one in the pinocchio Python bindings or the one in libA.so).

It shouldn't be hard to avoid breaking ABI compatibility, but one should be careful with such things.

wxmerkt commented 4 years ago

If the requirement to debug your algorithm is to compile every library in Debug mode locally, the efficiency of using dependencies as binaries goes away. E.g., when I develop Crocoddyl I want to be able to install Pinocchio binaries. If I make a mistake passing the wrong vector to say pinocchio::forwardKinematics (e.g., state instead of configuration), I want to see a throw, realise my mistake, and fix it.

It is not a good idea to throw inside efficient algorithms; it would break predictability. I would suggest doing this with a checker function defined for each algorithm.

I disagree. You will only have it throwing during development, when you may get sizes wrong (and the results are garbage if you pass wrongly sized vectors). For real deployment, it is just an if-check that won't throw (unless things change at runtime - and you do want to know that!). You can handle an exception, but you can't catch a segmentation fault.

jcarpent commented 4 years ago

I disagree. You will only have it throwing during development, when you may get sizes wrong (and the results are garbage if you pass wrongly sized vectors). For real deployment, it is just an if-check that won't throw (unless things change at runtime - and you do want to know that!). You can handle an exception, but you can't catch a segmentation fault.

Sorry @wxmerkt, I do not really understand your thoughts. Can you provide more details or an example to help my understanding?

VladimirIvan commented 4 years ago

It is not a good idea to throw inside efficient algorithms; it would break predictability. I would suggest doing this with a checker function defined for each algorithm.

This is not correct. Asserts should be used for checking inputs that are private and can always be checked at compile time. If the input can change at runtime, you should use exceptions. If the input is wrong, the code will crash no matter whether you throw or assert. The condition that triggers the exception will not add any significant performance hit, and CPUs are specifically built with branch-prediction hardware that removes the cost of such checks.

Here is a quote from Microsoft:

The exception mechanism has a very minimal performance cost if no exception is thrown. If an exception is thrown, the cost of the stack traversal and unwinding is roughly comparable to the cost of a function call. Additional data structures are required to track the call stack after a try block is entered, and additional instructions are required to unwind the stack if an exception is thrown. However, in most scenarios, the cost in performance and memory footprint is not significant. The adverse effect of exceptions on performance is likely to be significant only on very memory-constrained systems, or in performance-critical loops where an error is likely to occur regularly and the code to handle it is tightly coupled to the code that reports it. In any case, it's impossible to know the actual cost of exceptions without profiling and measuring. Even in those rare cases when the cost is significant, you can weigh it against the increased correctness, easier maintainability, and other advantages that are provided by a well-designed exception policy.

All discussions on this topic agree on that: 1, 2, 3, 4

wxmerkt commented 4 years ago
q = some vector of size nq
v = some vector of size nv
x = [q; v]

pinocchio.centerOfMass(model, data, x) <-- mistake here

During development, mistakes may occur, e.g., I accidentally pass x instead of q. If assert is active (as when compiled in Debug), this will crash Python with the Eigen vector-size assert. This aborts the whole process and I cannot catch it. This assert safety is absent if compiled in Release (as the default parameters for a local installation automatically remove assert, as highlighted in the original post of this issue). As such, this mistake would be "silent" and return garbage (by messing up memory) - unless debugged in Debug mode.

If we replace assert(input.size() == nq_ && "Wrong size"); with if (input.size() != nq_) throw std::runtime_error("Wrong size");, we will simply get a RuntimeError which we can catch. During development, we can see this and quickly fix it - even if the library is compiled in Release mode or installed from binaries. Plus, when we work interactively in a Jupyter notebook, we just see the error, fix it, and don't have to restart all the work. During deployment, there is no performance hit and it won't throw an exception at all.

To sum it up: I propose to replace all assert with if (input.size() != desired_size) throw std::runtime_error("Message");. There is no need for an extra library, dependency, or macro; the C++ features (and the Python bindings) will handle it by design.

gabrielebndn commented 4 years ago

@VladimirIvan

Asserts should be used for checking inputs that are private and can always be checked at compile time.

Yes, that is the general rule of thumb. In C++, I think the checks can be performed at compile-time, therefore I think assertions are good for that. This is not true in Python, and it would not be true should we decide to offer a pre-compiled shared Pinocchio library in the future; for such cases, exceptions are the way to go.

If the input is wrong, the code will crash no matter if you except or assert.

Totally true. I do not understand the point about predictability

The exception mechanism has a very minimal performance cost if no exception is thrown

What would be costly is not the assert-vs-exception mechanism itself. Rather, what is potentially costly is the act of performing the checks themselves. model.check(data) performs lots of tests, I think. With an assert compiled out, that line does not even exist and no checks are performed at all.

wxmerkt commented 4 years ago

In C++, I think the checks can be performed at compile-time

Only if you have fixed-size matrices or vectors. Using VectorXd, this is not applicable.

model.check(data) performs lots of tests, I think. With an assert, that line does not even exist and no checks are performed at all.

Checks that do not crash the system, such as input-size checks, should be super-cheap to perform. I agree a full model check is expensive (and if someone messes with the model, good luck), but a check on x, q, v, a, etc. (a much more common error source) should be essentially free.

jcarpent commented 4 years ago

I think Microsoft's argument about not affecting performance might be incorrect when algorithms work at the micro-second scale like ours. I will implement a quick benchmark to try it when I have some free time, before doing the complete change.

gabrielebndn commented 4 years ago

Only if you have fixed-size matrices or vectors. Using VectorXd, this is not applicable.

True. But still, I think the assertion mechanism works well for C++; I can think of very few cases where the input size can actually vary depending on program execution

jcarpent commented 4 years ago

In any case, I'm not at all against throwing in Python, as it might help beginners and the development of projects such as Crocoddyl or other libraries based on Pinocchio. So, we should think about such a solution.

gabrielebndn commented 4 years ago

One solution would be to redefine PINOCCHIO_ASSERT in the top header file of the Python bindings. For that, we just need to do something like:

#ifndef PINOCCHIO_ASSERT
  #define PINOCCHIO_ASSERT(cond,msg) assert((cond) && msg)
#endif

This will allow more generic features than the one you suggested before. @gabrielebndn What do you think?

This code would be in C++, right? And it would be re-defined in Python as exception-throwing? I think it might work. But still, what if we decide to implement the Pinocchio shared library? Then users would have no choice about whether they want the checks or not, and a mechanism like the one I am suggesting would be needed. Plus, I like the idea of providing a switch.

wxmerkt commented 4 years ago

Well, when we develop on top of Pinocchio in other software, this occurs as soon as you do not compile everything in Debug.

A good rule of thumb from link [1] Vlad posted:

Exceptions are used for run-time error conditions (IO errors, out of memory, can't get a database connection, etc.). Assertions are used for coding errors (this method doesn't accept nulls, and the developer passed one anyway). For libraries with public classes, throw exceptions on the public methods (because it makes sense to do so). Assertions are used to catch YOUR mistakes, not theirs.

So, checking the model would be a case for an assert (a Pinocchio core development error) and checking a q/v/a size is a case for an if (a public-function user error). In practice, the "cost" when everything is correct reduces to an if on size() vs a stored member (i.e., comparing two ints). The exception won't be thrown. It is very possible that the compiler or branch prediction makes the if essentially free.

Thank you very much for looking into this and benchmarking @jcarpent.

Edit: The plus of this is that Python will just naturally throw and be handled automatically without any extra code or complexity.

gabrielebndn commented 4 years ago

I think the argument of Microsoft about not affecting the performances might be incorrect when algorithms are working at the micro-second scales like ours. I will implement a quick bench to try it when I have some free times, before doing the complete change.

With my solution, there would be no run-time overhead in standard Release mode as compiled from headers. You would only have the checks if you require them by defining PINOCCHIO_RUNTIME_ASSERT. And you would have them in Python, too. But in Python we already have them, the point is that they behave in a bad way if they fail. Plus, you could decide to disable them with a switch.

jcarpent commented 4 years ago

@wxmerkt Following your suggestions, it should only be done for the algorithms folder. We might have some issues in other places (like JointComposite), but this is not a top priority. @gabrielebndn Your solution seems to be a good starting point, except for the global variable, which should be handled with a static variable inside a function, in a similar way to what Eigen does for set_is_malloc_allowed.

gabrielebndn commented 4 years ago

@gabrielebndn Your solution seems to be a good starting point, except for the global variable, which should be handled with a static variable inside a function, in a similar way to what Eigen does for set_is_malloc_allowed.

Yes, Eigen's set_is_malloc_allowed is an inspiration for the switch. The only difference is that Eigen still raises an assert when it fails, which is not good in Python and does not work when NDEBUG is defined. I cannot think of any way to implement a global switch other than with a static variable. I was thinking of doing it with the singleton paradigm, but it would be the same. Your point is that you do not want a static variable because it is not thread-safe, I guess? I understand. But is it important? Ideally, this switch should be configured once at the beginning of the program and left alone.

jcarpent commented 4 years ago

This is mandatory if you have several projects using Pinocchio. You do not want each one to define its own global variable, but rather to access it via a function containing a static variable. This is very similar to a singleton indeed, but simpler in the current context.

gabrielebndn commented 4 years ago

This is mandatory if you have several projects using Pinocchio. You do not want each one to define its own global variable, but rather to access it via a function containing a static variable. This is very similar to a singleton indeed, but simpler in the current context.

Yes, but would the use of a static variable be that bad in this case? I think that for this use case it would be fine.

jcarpent commented 4 years ago

No, I don't think so.

jcarpent commented 4 years ago

In addition, and to simplify the current API in terms of macro management in Pinocchio, I would be in favor of:

  • keeping PINOCCHIO_ASSERT as it is done now
  • adding PINOCCHIO_THROWING_ASSERT which will implement the throwing in Release mode and use the classic PINOCCHIO_ASSERT in the Debug mode.

gabrielebndn commented 4 years ago

I'm still not sure this topic is important. My opinion is: if users want to debug their code, they need to know that Pinocchio has to be compiled in Debug mode (like everyone else on the planet).

But users are not able to choose Release/Debug mode if they are using the bindings from apt. This forces them either to always have the checks or never to have them (and, in the latter case, to compile Pinocchio from source if they want them).

gabrielebndn commented 4 years ago

@jcarpent, sorry, I am not following you.

  • keeping PINOCCHIO_ASSERT as it is done now

We do not have a macro called PINOCCHIO_ASSERT at the moment. We have PINOCCHIO_ASSERT_MATRIX_SPECIFIC_SIZE

#define PINOCCHIO_ASSERT_MATRIX_SPECIFIC_SIZE(type,M,nrows,ncols)              \
  EIGEN_STATIC_ASSERT(   (type::RowsAtCompileTime == Eigen::Dynamic || type::RowsAtCompileTime == nrows) \
                      && (type::ColsAtCompileTime == Eigen::Dynamic || type::ColsAtCompileTime == ncols),\
                      THIS_METHOD_IS_ONLY_FOR_MATRICES_OF_A_SPECIFIC_SIZE);    \
  assert(M.rows()==nrows && M.cols()==ncols);

which would not change a lot and is not systematically used (most algorithms use a plain assert), and PINOCCHIO_STATIC_ASSERT

#define PINOCCHIO_STATIC_ASSERT(condition,msg)                                 \
  { int msg[(condition) ? 1 : -1]; /*avoid unused-variable warning*/ (void) msg; }

which only works for compile-time constants.

  • adding PINOCCHIO_THROWING_ASSERT which will implement the throwing in Release mode and use the classic PINOCCHIO_ASSERT in the Debug mode.

I really do not understand this suggestion. I thought you were against the checks in Release mode, and against exceptions in C++... And now you are suggesting throwing exceptions in Release mode? Then we might as well throw exceptions in Debug mode too! And what about Python?

jcarpent commented 4 years ago

By PINOCCHIO_ASSERT I mean something which can be a static assert, or something that the user may want to overload for personal reasons; using a macro is a well-defined way to allow that. On the other side, defining PINOCCHIO_THROWING_ASSERT will enable us to optionally throw for certain quantities like the vector dimensions mentioned by @wxmerkt.

gabrielebndn commented 4 years ago

By PINOCCHIO_ASSERT I mean something which can be a static assert, or something that the user may want to overload for personal reasons; using a macro is a well-defined way to allow that. On the other side, defining PINOCCHIO_THROWING_ASSERT will enable us to optionally throw for certain quantities like the vector dimensions mentioned by @wxmerkt.

I really don't see the point of having exceptions in Release mode and assertions in Debug mode. If you have exceptions in Release mode, I do not see the point of turning them into assertions in Debug mode; we might as well keep them as exceptions, which would do away with all the complications and be more consistent. Besides, you seem to suggest that the checks will be performed in Release mode, which I thought you wanted to avoid.

wxmerkt commented 4 years ago

Extra macros and definitions add complexity.

I agree that assert as currently is the right way to go for internal quantities such as models that affect Pinocchio developers.

For public-facing methods/functions, is there an argument against if (input.size() != member_) throw std::runtime_error("wrong size");? I would not define a macro and I would not switch between Debug and Release - simplicity wins. The if is negligible when the input is correct, and no exception is thrown. If there is an issue in dependent code, it will fail gracefully.

Besides, you seem to suggest that the checks will be performed in Release mode, which I thought you wanted to avoid.

The if should be negligible, and branch prediction may hide its cost altogether.

wxmerkt commented 4 years ago

One further argument regarding efficiency I'd like to add:

The robotpkg binaries currently include the asserts - an if should be the same cost or faster. This means they already perform all of these checks at runtime (plus whatever else is activated in Debug mode). On my computer there is a 3x difference in runtime between pulling the binaries and compiling on the machine (for centerOfMass/centerOfMassJacobian). I think adding an if that throws on user-input vectors/matrices, and then compiling with NDEBUG defined, should be more performant than the currently packaged version.

jcarpent commented 4 years ago

@gabrielebndn I just want to make the distinction between internal asserts, as @wxmerkt mentioned, and the ones that might THROW if the compilation flag is active (which should be ON by default if there is no drastic overhead -- which needs to be checked). But for benchmarking the algorithms in general, I would be in favor of silencing them.

@wxmerkt I think here a MACRO will simplify the reading of the code. And if one day we change our minds, it will ease the process of stepping backward.

jcarpent commented 4 years ago

One further argument regarding efficiency I'd like to add:

The robotpkg binaries currently include the asserts - an if should be the same cost or faster. This means they already perform all of these checks at runtime (plus whatever else is activated in Debug mode). On my computer there is a 3x difference in runtime between pulling the binaries and compiling on the machine (for centerOfMass/centerOfMassJacobian). I think adding an if that throws on user-input vectors/matrices, and then compiling with NDEBUG defined, should be more performant than the currently packaged version.

I really do not understand why robotpkg is doing this behind the user's back. @nim65s Do you have any clue about this issue?

gabrielebndn commented 4 years ago

@gabrielebndn I just want to make the distinction between internal asserts, as @wxmerkt mentioned, and the ones that might THROW if the compilation flag is active (which should be ON by default if there is no drastic overhead -- which needs to be checked).

But this has nothing to do with Release vs Debug mode

But for benchmarking the algorithms in general, I would be in favor of silencing them.

Yes. But then you would need to control this behavior by defining an extra macro.

@wxmerkt I think here a MACRO will simplify the reading of the code. And if one day we change our minds, it will ease the process of stepping backward.

Agreed on this: defining PINOCCHIO_ASSERT is easy, its use would be intuitive and legible, and the macro would be more flexible.

wxmerkt commented 4 years ago

Perhaps, prior to making a final decision, let's gather numbers on whether the if actually adds overhead. If it turns out to be negligible in benchmarks, there is no need for flags to turn it on/off and it can be left on by default.

nim65s commented 4 years ago

I really do not understand why robotpkg is doing this behind the user's back. @nim65s Do you have any clue about this issue?

I have already noticed and reported that -O3 -DNDEBUG is stripped from CXX_COMPILER_FLAGS_NDEBUG at some point, but we don't understand why yet. I'll look more deeply into this as soon as I'm back from my vacation :)

wxmerkt commented 4 years ago

Current devel, MinSizeRel

Results

$ ./timings
nq = 36
RNEA =      12.9434 us
NLE =       12.883 us
NLE via RNEA =      12.9513 us
CRBA =      24.4017 us
CRBA minimal =      16.9753 us
computeAllTerms =       51.3335 us
Branch Induced Sparsity Cholesky =  9.23815 us
Dense Eigen Cholesky =  9.37849 us
Jacobian =  4.9477 us
Jacobian Time Variation =   10.5291 us
COM+Jcom =  6.76057 us
COM+vCOM+aCOM =     12.2395 us
Zero Order Kinematics =     4.29077 us
First Order Kinematics =    6.80566 us
Second Order Kinematics =   10.1073 us
CCRBA =     10.1268 us
ABA =   31.739 us
Empty Forward Pass =    0.2308 us
Coriolis Matrix =   56.6405 us
Minv =  48.1577 us
--

Current devel, assert redefined as if(), MinSizeRel

In macros.hpp, this is the change:

#include <stdexcept>

#undef assert
#define assert(cond) do { if (!(cond)) { throw std::runtime_error("sad"); } } while (0)

Results

./timings
nq = 36
RNEA =      12.6389 us
NLE =       12.7326 us
NLE via RNEA =      12.4875 us
CRBA =      24.3746 us
CRBA minimal =      16.8823 us
computeAllTerms =       50.7045 us
Branch Induced Sparsity Cholesky =  9.12997 us
Dense Eigen Cholesky =  9.41454 us
Jacobian =  4.73732 us
Jacobian Time Variation =   10.5936 us
COM+Jcom =  6.75134 us
COM+vCOM+aCOM =     12.2407 us
Zero Order Kinematics =     4.29914 us
First Order Kinematics =    6.75247 us
Second Order Kinematics =   9.79773 us
CCRBA =     9.74422 us
ABA =   31.0411 us
Empty Forward Pass =    0.22624 us
Coriolis Matrix =   56.5723 us
Minv =  48.065 us
--