codemirror / codemirror5

In-browser code editor (version 5, legacy)
http://codemirror.net/5/
MIT License

Error in v3 rst mode #2850

Closed by dborowitz 10 years ago

dborowitz commented 10 years ago

With the snippet below, I get this error in my JS console when the page tries to render:

Uncaught TypeError: undefined is not a function rst.js:52

This is the tip of the v3 branch, 57e7ed7177b24664e57123e97a29800b2b205859.

Google Chrome   37.0.2062.120 (Official Build 281580) 
OS  Linux 
Blink   537.36 (@181352)
JavaScript  V8 3.27.34.17
Flash   15.0.0.152
<html>
<head>
  <link rel="stylesheet" href="lib/codemirror.css">
  <script src="lib/codemirror.js"></script>
  <script src="mode/python/python.js"></script>
  <script src="mode/stex/stex.js"></script>
  <script src="mode/rst/rst.js"></script>
</head>

<body>
  <!--
    The following code block is from:
    https://ceres-solver.googlesource.com/ceres-solver/+/b7d321f505e936b6c09aeb43ae3f7b1252388a95/docs/source/faqs.rst
    and is available under the following license:

    Ceres Solver - A fast non-linear least squares minimizer
    Copyright 2010, 2011, 2012 Google Inc. All rights reserved.
    http://code.google.com/p/ceres-solver/
    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions are met:
    * Redistributions of source code must retain the above copyright notice,
      this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright notice,
      this list of conditions and the following disclaimer in the documentation
      and/or other materials provided with the distribution.
    * Neither the name of Google Inc. nor the names of its contributors may be
      used to endorse or promote products derived from this software without
      specific prior written permission.
    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
    AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
    IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
    ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
    LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
    CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
    SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
    INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
    CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
    ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
    POSSIBILITY OF SUCH DAMAGE.
  -->
  <textarea id="rstexample">.. _chapter-tricks:

===================
FAQS, Tips &amp; Tricks
===================

Answers to frequently asked questions, tricks of the trade and general
wisdom.

Building
========

#. Use `google-glog &lt;http://code.google.com/p/google-glog&gt;`_.

   Ceres has extensive support for logging detailed information about
   memory allocations and time consumed in various parts of the solve,
   internal error conditions etc. This is done logging using the
   `google-glog &lt;http://code.google.com/p/google-glog&gt;`_ library. We
   use it extensively to observe and analyze Ceres's
   performance. `google-glog &lt;http://code.google.com/p/google-glog&gt;`_
   allows you to control its behaviour from the command line `flags
   &lt;http://google-glog.googlecode.com/svn/trunk/doc/glog.html&gt;`_. Starting
   with ``-logtostdterr`` you can add ``-v=N`` for increasing values
   of ``N`` to get more and more verbose and detailed information
   about Ceres internals.

   In an attempt to reduce dependencies, it is tempting to use
   `miniglog` - a minimal implementation of the ``glog`` interface
   that ships with Ceres. This is a bad idea. ``miniglog`` was written
   primarily for building and using Ceres on Android because the
   current version of `google-glog
   &lt;http://code.google.com/p/google-glog&gt;`_ does not build using the
   NDK. It has worse performance than the full fledged glog library
   and is much harder to control and use.

Modeling
========

#. Use analytical/automatic derivatives.

   This is the single most important piece of advice we can give to
   you. It is tempting to take the easy way out and use numeric
   differentiation. This is a bad idea. Numeric differentiation is
   slow, ill-behaved, hard to get right, and results in poor
   convergence behaviour.

   Ceres allows the user to define templated functors which will
   be automatically differentiated. For most situations this is enough
   and we recommend using this facility. In some cases the derivatives
   are simple enough or the performance considerations are such that
   the overhead of automatic differentiation is too much. In such
   cases, analytic derivatives are recommended.

   The use of numerical derivatives should be a measure of last
   resort, where it is simply not possible to write a templated
   implementation of the cost function.

   In many cases it is not possible to do analytic or automatic
   differentiation of the entire cost function, but it is generally
   the case that it is possible to decompose the cost function into
   parts that need to be numerically differentiated and parts that can
   be automatically or analytically differentiated.

   To this end, Ceres has extensive support for mixing analytic,
   automatic and numeric differentiation. See
   :class:`NumericDiffFunctor` and :class:`CostFunctionToFunctor`.

#. Putting `Inverse Function Theorem
   &lt;http://en.wikipedia.org/wiki/Inverse_function_theorem&gt;`_ to use.

   Every now and then we have to deal with functions which cannot be
   evaluated analytically. Computing the Jacobian in such cases is
   tricky. A particularly interesting case is where the inverse of the
   function is easy to compute analytically. An example of such a
   function is the Coordinate transformation between the `ECEF
   &lt;http://en.wikipedia.org/wiki/ECEF&gt;`_ and the `WGS84
   &lt;http://en.wikipedia.org/wiki/World_Geodetic_System&gt;`_ where the
   conversion from WGS84 from ECEF is analytic, but the conversion
   back to ECEF uses an iterative algorithm. So how do you compute the
   derivative of the ECEF to WGS84 transformation?

   One obvious approach would be to numerically
   differentiate the conversion function. This is not a good idea. For
   one, it will be slow, but it will also be numerically quite
   bad.

   Turns out you can use the `Inverse Function Theorem
   &lt;http://en.wikipedia.org/wiki/Inverse_function_theorem&gt;`_ in this
   case to compute the derivatives more or less analytically.

   The key result here is. If :math:`x = f^{-1}(y)`, and :math:`Df(x)`
   is the invertible Jacobian of :math:`f` at :math:`x`. Then the
   Jacobian :math:`Df^{-1}(y) = [Df(x)]^{-1}`, i.e., the Jacobian of
   the :math:`f^{-1}` is the inverse of the Jacobian of :math:`f`.

   Algorithmically this means that given :math:`y`, compute :math:`x =
   f^{-1}(y)` by whatever means you can. Evaluate the Jacobian of
   :math:`f` at :math:`x`. If the Jacobian matrix is invertible, then
   the inverse is the Jacobian of the inverse at :math:`y`.

   One can put this into practice with the following code fragment.

   .. code-block:: c++

      Eigen::Vector3d ecef; // Fill some values
      // Iterative computation.
      Eigen::Vector3d lla = ECEFToLLA(ecef);
      // Analytic derivatives
      Eigen::Matrix3d lla_to_ecef_jacobian = LLAToECEFJacobian(lla);
      bool invertible;
      Eigen::Matrix3d ecef_to_lla_jacobian;
      lla_to_ecef_jacobian.computeInverseWithCheck(ecef_to_lla_jacobian, invertible);

#. When using Quaternions, use :class:`QuaternionParameterization`.

   TBD

#. How to choose a parameter block size?

   TBD

Solving
=======

#. Choosing a linear solver.

   When using the ``TRUST_REGION`` minimizer, the choice of linear
   solver is an important decision. It affects solution quality and
   runtime. Here is a simple way to reason about it.

   1. For small (a few hundred parameters) or dense problems use
      ``DENSE_QR``.

   2. For general sparse problems (i.e., the Jacobian matrix has a
      substantial number of zeros) use
      ``SPARSE_NORMAL_CHOLESKY``. This requires that you have
      ``SuiteSparse`` or ``CXSparse`` installed.

   3. For bundle adjustment problems with up to a hundred or so
      cameras, use ``DENSE_SCHUR``.

   4. For larger bundle adjustment problems with sparse Schur
      Complement/Reduced camera matrices use ``SPARSE_SCHUR``. This
      requires that you have ``SuiteSparse`` or ``CXSparse``
      installed.

   5. For large bundle adjustment problems (a few thousand cameras or
      more) use the ``ITERATIVE_SCHUR`` solver. There are a number of
      preconditioner choices here. ``SCHUR_JACOBI`` offers an
      excellent balance of speed and accuracy. This is also the
      recommended option if you are solving medium sized problems for
      which ``DENSE_SCHUR`` is too slow but ``SuiteSparse`` is not
      available.

      If you are not satisfied with ``SCHUR_JACOBI``'s performance try
      ``CLUSTER_JACOBI`` and ``CLUSTER_TRIDIAGONAL`` in that
      order. They require that you have ``SuiteSparse``
      installed. Both of these preconditioners use a clustering
      algorithm. Use ``SINGLE_LINKAGE`` before ``CANONICAL_VIEWS``.

#. Use `Solver::Summary::FullReport` to diagnose performance problems.

   When diagnosing Ceres performance issues - runtime and convergence,
   the first place to start is by looking at the output of
   ``Solver::Summary::FullReport``. Here is an example

   .. code-block:: bash

     ./bin/bundle_adjuster --input ../data/problem-16-22106-pre.txt

     iter      cost      cost_change  |gradient|   |step|    tr_ratio  tr_radius  ls_iter  iter_time  total_time
        0  4.185660e+06    0.00e+00    2.16e+07   0.00e+00   0.00e+00  1.00e+04       0    7.50e-02    3.58e-01
        1  1.980525e+05    3.99e+06    5.34e+06   2.40e+03   9.60e-01  3.00e+04       1    1.84e-01    5.42e-01
        2  5.086543e+04    1.47e+05    2.11e+06   1.01e+03   8.22e-01  4.09e+04       1    1.53e-01    6.95e-01
        3  1.859667e+04    3.23e+04    2.87e+05   2.64e+02   9.85e-01  1.23e+05       1    1.71e-01    8.66e-01
        4  1.803857e+04    5.58e+02    2.69e+04   8.66e+01   9.93e-01  3.69e+05       1    1.61e-01    1.03e+00
        5  1.803391e+04    4.66e+00    3.11e+02   1.02e+01   1.00e+00  1.11e+06       1    1.49e-01    1.18e+00

     Ceres Solver v1.10.0 Solve Report
     ----------------------------------
                                          Original                  Reduced
     Parameter blocks                        22122                    22122
     Parameters                              66462                    66462
     Residual blocks                         83718                    83718
     Residual                               167436                   167436

     Minimizer                        TRUST_REGION

     Sparse linear algebra library    SUITE_SPARSE
     Trust region strategy     LEVENBERG_MARQUARDT

                                             Given                     Used
     Linear solver                    SPARSE_SCHUR             SPARSE_SCHUR
     Threads                                     1                        1
     Linear solver threads                       1                        1
     Linear solver ordering              AUTOMATIC                22106, 16

     Cost:
     Initial                          4.185660e+06
     Final                            1.803391e+04
     Change                           4.167626e+06

     Minimizer iterations                        5
     Successful steps                            5
     Unsuccessful steps                          0

     Time (in seconds):
     Preprocessor                            0.283

       Residual evaluation                   0.061
       Jacobian evaluation                   0.361
       Linear solver                         0.382
     Minimizer                               0.895

     Postprocessor                           0.002
     Total                                   1.220

     Termination:                   NO_CONVERGENCE (Maximum number of iterations reached.)

  Let us focus on run-time performance. The relevant lines to look at
  are

   .. code-block:: bash

     Time (in seconds):
     Preprocessor                            0.283

       Residual evaluation                   0.061
       Jacobian evaluation                   0.361
       Linear solver                         0.382
     Minimizer                               0.895

     Postprocessor                           0.002
     Total                                   1.220

  Which tell us that of the total 1.2 seconds, about .3 seconds was
  spent in the linear solver and the rest was mostly spent in
  preprocessing and jacobian evaluation.

  The preprocessing seems particularly expensive. Looking back at the
  report, we observe

   .. code-block:: bash

     Linear solver ordering              AUTOMATIC                22106, 16

  Which indicates that we are using automatic ordering for the
  ``SPARSE_SCHUR`` solver. This can be expensive at times. A straight
  forward way to deal with this is to give the ordering manually. For
  ``bundle_adjuster`` this can be done by passing the flag
  ``-ordering=user``. Doing so and looking at the timing block of the
  full report gives us

   .. code-block:: bash

     Time (in seconds):
     Preprocessor                            0.051

       Residual evaluation                   0.053
       Jacobian evaluation                   0.344
       Linear solver                         0.372
     Minimizer                               0.854

     Postprocessor                           0.002
     Total                                   0.935

  The preprocessor time has gone down by more than 5.5x!.

Further Reading
===============

For a short but informative introduction to the subject we recommend
the booklet by [Madsen]_ . For a general introduction to non-linear
optimization we recommend [NocedalWright]_. [Bjorck]_ remains the
seminal reference on least squares problems. [TrefethenBau]_ book is
our favorite text on introductory numerical linear algebra. [Triggs]_
provides a thorough coverage of the bundle adjustment problem.</textarea>
  <script>
    var rsteditor = CodeMirror.fromTextArea(document.getElementById("rstexample"), {mode: "text/x-rst"});
  </script>
</body>
</html>
dborowitz commented 10 years ago

The URL I gave for the source of the code block was slightly incomplete; the full URL is https://ceres-solver.googlesource.com/ceres-solver/+/b7d321f505e936b6c09aeb43ae3f7b1252388a95/docs/source/faqs.rst

marijnh commented 10 years ago

It looks like you forgot to load addon/mode/overlay.js
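The v3 rst mode builds on CodeMirror.overlayMode, which that addon defines, so the addon has to be loaded before mode/rst/rst.js. A minimal sketch of the required script order, assuming the default directory layout of a CodeMirror checkout:

<script src="lib/codemirror.js"></script>
<script src="addon/mode/overlay.js"></script>
<script src="mode/rst/rst.js"></script>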

dborowitz commented 10 years ago

That changes the stack trace but doesn't fix it:

Uncaught AssertException: undefined rst.js:79
assert rst.js:79
to_normal rst.js:279
token rst.js:552
token overlay.js:38
processLine codemirror.js:4552
(anonymous function) codemirror.js:984
window.CodeMirror.LeafChunk.iterN codemirror.js:4874
window.CodeMirror.BranchChunk.iterN codemirror.js:4964
window.CodeMirror.BranchChunk.iterN codemirror.js:4964
window.CodeMirror.createObj.iter codemirror.js:4996
getStateBefore codemirror.js:983
getLineStyles codemirror.js:4540
buildLineContent codemirror.js:4592
measureLineWidth codemirror.js:1168
endOperation codemirror.js:1396
(anonymous function) codemirror.js:1455
CodeMirror codemirror.js:77
CodeMirror codemirror.js:48
window.CodeMirror.CodeMirror.fromTextArea codemirror.js:3769
(anonymous function) test.html:325

New complete test file (identical to the one above except for the added overlay.js script in the head):

<html>
<head>
  <link rel="stylesheet" href="lib/codemirror.css">
  <script src="lib/codemirror.js"></script>
  <script src="addon/mode/overlay.js"></script>
  <script src="mode/python/python.js"></script>
  <script src="mode/stex/stex.js"></script>
  <script src="mode/rst/rst.js"></script>
</head>
<!-- body identical to the test file above -->
</html>
marijnh commented 10 years ago

Right, the rst mode in v3 still has the braindead assertions in it. If you can't upgrade to v4 at all, you can try pulling just the rst.js file from v4; it should still work with v3 and has been improved in the meantime. (v3 is no longer supported, in any case.)
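A minimal sketch of that workaround, assuming the v4 copy of the mode is saved as mode/rst/rst-v4.js alongside the v3 tree (the file name and layout here are illustrative, not prescribed above):

<link rel="stylesheet" href="lib/codemirror.css">
<script src="lib/codemirror.js"></script>          <!-- v3 core, unchanged -->
<script src="addon/mode/overlay.js"></script>      <!-- harmless to keep loaded even if the v4 mode no longer needs it -->
<script src="mode/python/python.js"></script>
<script src="mode/stex/stex.js"></script>
<script src="mode/rst/rst-v4.js"></script>         <!-- rst mode taken from the v4 branch -->
<script>
  // Same initialization as in the test files above; the v4 mode still
  // registers the "text/x-rst" MIME type, so nothing else should change.
  var rsteditor = CodeMirror.fromTextArea(document.getElementById("rstexample"),
                                          {mode: "text/x-rst"});
</script>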