RobotLocomotion / drake

Model-based design and verification for robotics.
https://drake.mit.edu

Support for Transfer Function models of LTI systems #20159

Open RussTedrake opened 10 months ago

RussTedrake commented 10 months ago

I'd like to have a TransferFunction class which supports basic operations such as the following commands that are available in Matlab:

% Transfer function commands
s = tf('s')
H = (1/(s+0.1))*(1/s)*(1/s)
pole(H)
step(H)
bode(H)
margin(H)
G = feedback(1000/(s+1),1)
rlocus((s+8)/((s+20)*(s*s+2*s+1)))
step(feedback((60*(s+16)/((s+20)*(s*s+2*s+1))),1))
bode(feedback((60*(s+16)/((s+20)*(s*s+2*s+1))),1))
margin(60*(s+16)/((s+20)*(s*s+2*s+1)))

Additionally, we would want to convert to/from the existing LinearSystem (state-space) models.

Here is an example of a lab that could be ported to Drake if we can land this support.

My initial thinking is that this class would not be a LeafSystem... it doesn't provide the time-domain semantics to support that abstraction. One would convert it to a LinearSystem to work with it in the systems framework. (Basic rules of composition could be implemented directly on the TransferFunction class; we could make a DiagramBuilder-type workflow eventually, but I wouldn't worry about that for now).
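
For illustration, here is a minimal sketch of what those composition rules might look like on top of symbolic::RationalFunction (the helper names Series, Parallel, and Feedback are mine, not existing Drake API; this assumes the scalar/RationalFunction arithmetic operators provided by rational_function.h):

#include "drake/common/symbolic/rational_function.h"

using drake::symbolic::RationalFunction;

// Series connection: U -> H1 -> H2 -> Y, i.e. Y/U = H2 * H1.
RationalFunction Series(const RationalFunction& H1, const RationalFunction& H2) {
  return H2 * H1;
}

// Parallel connection: Y/U = H1 + H2.
RationalFunction Parallel(const RationalFunction& H1,
                          const RationalFunction& H2) {
  return H1 + H2;
}

// Negative feedback of K around H (Matlab's feedback(H, K)): Y/U = H / (1 + H*K).
RationalFunction Feedback(const RationalFunction& H, const RationalFunction& K) {
  return H / (1 + H * K);
}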

I would expect that the internal representation of the TransferFunction would use our existing RationalFunction. I think we'll need to add the basic (univariate) root-finding algorithm to symbolic::Polynomial. Wikipedia says "For finding all the roots, arguably the most reliable method is the Francis QR algorithm computing the eigenvalues of the Companion matrix corresponding to the polynomial, implemented as the standard method[1] in MATLAB."
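
For reference, here is a minimal standalone sketch of that companion-matrix approach using Eigen (not tied to symbolic::Polynomial; the function name FindPolynomialRoots is just for illustration):

#include <stdexcept>

#include <Eigen/Dense>

// Returns the roots of c(0) + c(1)*x + ... + c(n)*x^n (with c(n) != 0) as the
// eigenvalues of the polynomial's companion matrix.
Eigen::VectorXcd FindPolynomialRoots(const Eigen::VectorXd& c) {
  const int n = c.size() - 1;
  if (n < 1 || c(n) == 0.0) {
    throw std::runtime_error("Expected a polynomial of degree >= 1.");
  }
  // Ones on the subdiagonal; the last column holds the negated coefficients of
  // the monic-normalized polynomial.
  Eigen::MatrixXd companion = Eigen::MatrixXd::Zero(n, n);
  companion.bottomLeftCorner(n - 1, n - 1).setIdentity();
  companion.rightCols<1>() = -c.head(n) / c(n);
  Eigen::EigenSolver<Eigen::MatrixXd> solver(companion,
                                             /* computeEigenvectors = */ false);
  return solver.eigenvalues();
}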

RussTedrake commented 10 months ago

I've realized that I will need to add some support for complex numbers to symbolic::Expression. Minimally, that will involve a new constant, j, and a ComplexEvaluate() method that returns Eigen::VectorXcd instead of Eigen::VectorXd. That seems reasonable.
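
To make the intended use concrete, here is a hypothetical sketch (neither the constant j nor ComplexEvaluate() exists in Drake today; the names and exact signatures below are guesses at the proposal, shown only for illustration):

#include <complex>

#include <Eigen/Core>

#include "drake/common/eigen_types.h"
#include "drake/common/symbolic/expression.h"

using drake::Vector1;
using drake::symbolic::Expression;
using drake::symbolic::Variable;

void FrequencyResponseDemo() {
  // Evaluate the frequency response H(j*omega) of a first-order lag at
  // omega = 10 rad/s.
  const Variable s{"s"};
  const Expression H = 1 / (s + 0.1);

  // Hypothetical: Expression::j() would be the new imaginary-unit constant.
  const Expression H_jw = H.Substitute(s, Expression::j() * 10.0);

  // Hypothetical: ComplexEvaluate() returns Eigen::VectorXcd instead of VectorXd.
  const Eigen::VectorXcd value = ComplexEvaluate(Vector1<Expression>{H_jw});
  // std::abs(value[0]) and std::arg(value[0]) give the Bode magnitude and phase.
}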

My simple initial implementation of state-space to transfer function conversion (which I'll drop here) will need to wait for that to land first.

MatrixX<RationalFunction> ConvertStateSpaceToTransferFunction(
    const LinearSystem<double>& sys) {
  if (sys.num_inputs() == 0 || sys.num_outputs() == 0) {
    // Then we have an empty transfer function.
    return MatrixX<RationalFunction>(sys.num_outputs(), sys.num_inputs());
  }

  // Y(s)/U(s) = C (sI - A)^-1 B + D or 
  // Y(z)/U(z) = C (zI - A)^-1 B + D.

  // To keep this in rational form, for now we only implement the simplest case
  // where A is diagonalizable (the more general case can be handled using
  // Jordan normal forms).
  // A = P D P^{-1} => (sI - A)^-1 = P (sI - D)^-1 P^{-1}. 
  Eigen::ComplexEigenSolver<Eigen::MatrixXd> solver(sys.A());
  Eigen::MatrixXcd P = solver.eigenvectors();
  Eigen::VectorXcd lambda = solver.eigenvalues();
  Eigen::MatrixXcd Pinv = P.inverse();
  // Check that the matrix was actually diagonalizable (i.e., that
  // P * diag(lambda) * Pinv reconstructs A).
  if (!(sys.A().cast<std::complex<double>>() - P * lambda.asDiagonal() * Pinv)
           .isZero(1e-8)) {
    throw std::runtime_error(
        "The state-space to transfer function method currently requires that "
        "sys.A() be diagonalizable.");
  }
  RationalFunction s_or_z{sys.time_period() > 0 ? TransferFunction::z()
                                                : TransferFunction::s()};
  MatrixX<RationalFunction> sI_minus_D_inverse =
      (1.0 / (s_or_z - lambda.array())).matrix().asDiagonal();

  MatrixX<RationalFunction> H =
      sys.C() * P * sI_minus_D_inverse * Pinv * sys.B() + sys.D();
  return H;
}

jwnimmer-tri commented 10 months ago

(I'll use Python terminology in my comments below...)

Would ComplexEvaluate(env) take the environment as Dict[Variable, float] or Dict[Variable, np.complex]?

On a related note, I'll be adding AutoDiffXd as another cell type to Expression, and I've been working on how to spell the (many) API amendments to the Evaluate(...) & Substitute(...) functions needed to handle the additional types. We also have a tranche of batch-evaluation functions on deck (for multicore, with Parallelism).

My current draft has it go like Evaluate(Dict[Variable, T]) => T with T as any default scalar. Whatever type the user provides for the variable values is the type they get back. If they give it float substitutions, they get a float back. If they give it AutoDiffXd substitutions, they get an AutoDiffXd back. This also subsumes Substitute (by "evaluating" with T=Expression).

If the computed value cannot fit into the return type, that's an exception. For example, if the user has a Dict[Variable, float] but there are AutoDiffXd constants inside the expression tree, calling Evaluate(Dict[Variable, float]) -> float would throw (since the return type cannot express a gradient). In that case, the user would pass a second kwarg T=AutoDiffXd to explicitly specify the desired return type. (In C++, the T would be a template argument; in Python it would be the kwarg.) This also subsumes the current failure mode of throwing when the expression contains variables that are missing from the environment.

I think we could extend that idea to allow T=np.complex in Evaluate() as well.
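
Here is a hypothetical C++ sketch of that convention (none of these overloads exist in Drake today; the call shapes below only illustrate the proposal): the scalar type of the substitution map determines the return type, and an explicit template argument requests a wider return type when deduction would not suffice.

// Hypothetical calls only -- sketching the proposed convention, not current
// Drake API.
#include <complex>
#include <unordered_map>

#include "drake/common/autodiff.h"
#include "drake/common/symbolic/expression.h"

using drake::AutoDiffXd;
using drake::symbolic::Expression;
using drake::symbolic::Variable;

void Sketch(const Expression& e, const Variable& x) {
  // double substitutions => double result (analogous to today's
  // Evaluate(Environment)).
  double y1 = e.Evaluate(std::unordered_map<Variable, double>{{x, 2.0}});

  // AutoDiffXd substitutions => AutoDiffXd result.
  AutoDiffXd y2 = e.Evaluate(
      std::unordered_map<Variable, AutoDiffXd>{{x, AutoDiffXd(2.0)}});

  // double substitutions but AutoDiffXd constants inside the tree: request the
  // return type explicitly; without <AutoDiffXd> this call would throw.
  AutoDiffXd y3 = e.Evaluate<AutoDiffXd>(
      std::unordered_map<Variable, double>{{x, 2.0}});

  // Expression substitutions subsume Substitute().
  Expression y4 = e.Evaluate(
      std::unordered_map<Variable, Expression>{{x, 3 * x}});

  // The extension discussed here: complex substitutions => complex result.
  std::complex<double> y5 = e.Evaluate(
      std::unordered_map<Variable, std::complex<double>>{{x, {0.0, 1.0}}});
}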

RussTedrake commented 10 months ago

Nice! I think you are right that it would be Dict[Variable, np.complex]. The variable will always be s or z here -- they are certainly complex.

jwnimmer-tri commented 10 months ago

BTW I think we might also need to add Variable::Type::COMPLEX.