Closed briha closed 1 year ago
Hi, this factor 10,000 is not something that should be there, so thanks for posting.
There are, broadly speaking, two possible reasons why this could be happening:
One question for you, before we dive into it further:
The different setups are taking different steps, and as a result a different number of function evaluations. Have you checked whether the results are at the same epochs, and whether the same number of function evaluations are performed in the two cases?
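One generic way to check this (a sketch, not tudat-specific — the `deriv` function here is a toy stand-in for the real state derivative): wrap the derivative callable in a counting wrapper before handing it to each integrator, then compare the counts after both runs.

```python
class CountingDerivative:
    """Wraps a state-derivative callable and counts how often it is invoked."""

    def __init__(self, deriv):
        self.deriv = deriv
        self.calls = 0

    def __call__(self, t, y):
        self.calls += 1
        return self.deriv(t, y)


# Toy derivative standing in for the real one.
def deriv(t, y):
    return [-yi for yi in y]

counted = CountingDerivative(deriv)

# Hand `counted` to the integrator instead of the raw derivative; afterwards,
# `counted.calls` is the number of function evaluations. Simulated here:
for _ in range(10):
    counted(0.0, [1.0, 2.0])

print(counted.calls)  # → 10
```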
This is a good point. I have not checked the function evaluations, since both setups integrate the same trajectory with the same integration settings. That is clearly where the difference comes from, although I do not know why (all using the RKDP_87 from tudat):

|            | steps  | runtime   |
|------------|--------|-----------|
| Tudat      | 127836 | 300.14 s  |
| "manually" | 234    | 66.87 ms  |
| pykep      | 185    | 134.57 ms |
I included pykep to show the Python overhead, as it is implementing the same dynamics in Python (in Cartesian instead of MEE).
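A quick sanity check on the numbers in the table above (not from the original thread): dividing runtime by step count shows that the per-step cost differs far less than the total runtime, so the step count is indeed the dominant factor.

```python
# Runtime and step count for each setup, taken from the table above.
runs = {
    "Tudat": (300.14, 127836),   # (seconds, steps)
    "manual": (0.06687, 234),
    "pykep": (0.13457, 185),
}

per_step_ms = {name: 1000 * t / n for name, (t, n) in runs.items()}
for name, ms in per_step_ms.items():
    print(f"{name}: {ms:.3f} ms/step")

# The ~4500x total-runtime gap between Tudat and the manual run is dominated
# by the ~546x step-count gap; the remaining ~8x is per-step cost.
```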
A single function evaluation takes much longer in the Python implementation. To assess this, we'd need to see the full implementation of a single state derivative.
As can be seen just from the difference in step counts, this is not the issue at hand. To clarify nonetheless: both setups use the same derivative function, implemented in C++. It is exposed to Python and passed to tudat through tudatpy, and thus has to go C++ → Python → C++. I do not know whether the overhead added this way is significant, but, again, if it is, it should affect both implementations equally.
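The language-boundary overhead can also be quantified in isolation (a generic sketch, not tudat-specific): time the wrapped derivative directly from Python. If a single call is cheap relative to the observed per-step cost, the C++ → Python → C++ round trip is not the bottleneck.

```python
import time

def mean_call_time(f, args, n=100_000):
    """Average wall-clock time of n calls to f(*args), in seconds."""
    t0 = time.perf_counter()
    for _ in range(n):
        f(*args)
    return (time.perf_counter() - t0) / n

# Stand-in for the C++-backed fullstate derivative exposed to Python.
def deriv(t, y):
    return [-yi for yi in y]

t_per_call = mean_call_time(deriv, (0.0, [1.0] * 14))
print(f"{t_per_call * 1e6:.2f} us per call")
```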
The fullstate derivative is computed in the following way (within the context of a class):
```cpp
Eigen::Vector14d computeFullstateDerivative() {
    Eigen::Vector14d fullstateDerivative;
    fullstateDerivative << computeStateDerivative(), computeCostateDerivative();
    return fullstateDerivative;
}
```
which should make evident where the costate derivative enters the fullstate derivative.
Have you tried simplifying (for testing purposes), or removing altogether, the custom models to check their influence on the run time and the number of function evaluations?
Only propagating the kinematic state and mass, the problem does indeed mostly vanish (it runs in 690.14 ms, taking 143 steps). Dropping the costate propagation of course means a constant thrust direction and magnitude, which may even be zero if that is the value at t=0.
Hi! When propagating a spacecraft with `custom_state` and `mass` propagators, it takes forever (~200 seconds for `rkdp_89` with $abs=rel=10^{-10}$, but comparable for other options, too) when doing it through tudatpy directly (using a `SingleArcSimulator`). When using the exact same integrator manually, i.e. where the state derivative represents the fullstate vector (i.e., state, mass, custom state), it takes about 0.02 s (a factor 10,000 difference).

Here's an example of what my code looks like. This is not a minimal working example, but should give an idea of how I invoke the integration. Please let me know if anything is unclear (`DynamicsSimulator`, `Spacecraft`, `constants`, and a number of other classes, constants, and functions are locally imported, which is not shown here).

The integration itself is invoked like this, and identical settings are used both when the integration routine is called 'manually' and when tudat's `SingleArcSimulator` is used. The directives for the costates (i.e. the custom propagator) and the fullstate are both implemented in C++ and exposed to Python, through the `FastOptimalControlTrajectoryBase` class or one of its children.
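For readers, the 'manual' invocation pattern described above (one derivative for the stacked fullstate of state, mass, and costates) can be sketched in plain Python. The dynamics here are a toy stand-in, and a fixed-step RK4 replaces the adaptive RKDP pair for brevity:

```python
def fullstate_derivative(t, y):
    """Toy stand-in: the real function stacks state, mass, and costate rates."""
    return [-yi for yi in y]  # dy/dt = -y for every component

def rk4_step(f, t, y, h):
    """One classical RK4 step for a list-valued state."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Integrate the stacked 14-element fullstate over [0, 1] in 100 steps.
y, t, h = [1.0] * 14, 0.0, 0.01
for _ in range(100):
    y = rk4_step(fullstate_derivative, t, y, h)
    t += h

print(round(y[0], 6))  # → 0.367879, i.e. exp(-1)
```

The point is only the shape of the loop: a single callable produces the entire fullstate derivative, and the stepper never needs to know which components are state, mass, or costates.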