Bundle adjustment

Since the observed points ${\tt m}_{ki}$ are fixed, a specific residual $r_{ki}=D({\tt m}_{ki},{\bf\hat{P}}_k {\tt\hat{M}}_i)^2$ only depends on the $i$-th point and the $k$-th projection matrix. This results in a very sparse Jacobian matrix, illustrated in figure A.1 for 3 views and 4 points.
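The sparsity pattern can be made concrete with a small sketch. The sizes below are assumptions for illustration only: 11 parameters per projection matrix (a $3\times4$ matrix up to scale), 3 per point, and 2 residual rows per observation ${\tt m}_{ki}$; the text itself does not fix a parametrization.

```python
import numpy as np

# Hypothetical sizes, matching figure A.1: 3 views, 4 points.
# Assumed parametrization: 11 parameters per projection matrix,
# 3 per 3D point, 2 residual rows per observed image point.
n_views, n_points = 3, 4
cam_params, pt_params = 11, 3

rows = 2 * n_views * n_points
cols = n_views * cam_params + n_points * pt_params
mask = np.zeros((rows, cols), dtype=bool)

for k in range(n_views):
    for i in range(n_points):
        r = 2 * (k * n_points + i)  # rows of residual r_ki
        # d m_ki / d P_k: only the k-th camera block is nonzero
        mask[r:r + 2, k * cam_params:(k + 1) * cam_params] = True
        # d m_ki / d M_i: only the i-th point block is nonzero
        c = n_views * cam_params + i * pt_params
        mask[r:r + 2, c:c + pt_params] = True

# Each pair of residual rows touches 14 of the 45 columns.
print(mask.mean())
```

Every observation fills exactly one camera block and one point block per row pair, which is the pattern shown in figure A.1.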

Figure A.1: Sparse structure of the Jacobian for bundle adjustment. [figure omitted]
Because of the block structure of the Jacobian, the normal equations ${\bf J}^\top{\bf J}{\bf x}={\bf b}$ also have a block structure, as seen in figure A.2.
Figure A.2: Block structure of the normal equations. [figure omitted]
It is possible to write down explicit formulas for each block. Let us first introduce the following notation:
\begin{displaymath}
{\bf U}_k = \sum_i \left(\frac{\partial \hat{\tt m}_{ki}}{\partial \hat{\bf P}_k}\right)^{\!\top} \frac{\partial \hat{\tt m}_{ki}}{\partial \hat{\bf P}_k}
\end{displaymath} (A5)
\begin{displaymath}
{\bf V}_i = \sum_k \left(\frac{\partial \hat{\tt m}_{ki}}{\partial \hat{\tt M}_i}\right)^{\!\top} \frac{\partial \hat{\tt m}_{ki}}{\partial \hat{\tt M}_i}
\end{displaymath} (A6)
\begin{displaymath}
{\bf W}_{ki} = \left(\frac{\partial \hat{\tt m}_{ki}}{\partial \hat{\bf P}_k}\right)^{\!\top} \frac{\partial \hat{\tt m}_{ki}}{\partial \hat{\tt M}_i}
\end{displaymath} (A7)
\begin{displaymath}
\epsilon({\bf P}_k) = \sum_i \left(\frac{\partial \hat{\tt m}_{ki}}{\partial \hat{\bf P}_k}\right)^{\!\top} \epsilon_{ki}
\end{displaymath} (A8)
\begin{displaymath}
\epsilon({\tt M}_i) = \sum_k \left(\frac{\partial \hat{\tt m}_{ki}}{\partial \hat{\tt M}_i}\right)^{\!\top} \epsilon_{ki}
\end{displaymath} (A9)

with $\frac{\partial \hat{\tt m}_{ki}}{\partial \hat{\bf P}_k}$ and $\frac{\partial \hat{\tt m}_{ki}}{\partial \hat{\tt M}_i}$ the matrices containing the partial derivatives of the coordinates of $\hat{\tt m}_{ki}$ with respect to the parameters of $\hat{\bf P}_k$ and $\hat{\tt M}_i$ respectively. In this case the normal equations can be rewritten as
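Assembling the blocks of equations A.5 through A.9 is a direct accumulation over the per-observation Jacobian blocks. The sketch below uses random matrices as stand-ins for the actual derivatives, and the block sizes ($2\times11$ camera blocks, $2\times3$ point blocks) are assumptions, not something the text prescribes.

```python
import numpy as np

# Stand-ins for the per-observation Jacobian blocks and residuals:
# A[k, i] ~ d m_ki / d P_k (2 x 11), B[k, i] ~ d m_ki / d M_i (2 x 3),
# e[k, i] ~ residual vector eps_ki (2,). Values are random placeholders.
rng = np.random.default_rng(0)
n_views, n_points = 3, 4
A = rng.standard_normal((n_views, n_points, 2, 11))
B = rng.standard_normal((n_views, n_points, 2, 3))
e = rng.standard_normal((n_views, n_points, 2))

# Equations A.5-A.9, accumulated block by block:
U = [sum(A[k, i].T @ A[k, i] for i in range(n_points)) for k in range(n_views)]
V = [sum(B[k, i].T @ B[k, i] for k in range(n_views)) for i in range(n_points)]
W = [[A[k, i].T @ B[k, i] for i in range(n_points)] for k in range(n_views)]
eps_P = [sum(A[k, i].T @ e[k, i] for i in range(n_points)) for k in range(n_views)]
eps_M = [sum(B[k, i].T @ e[k, i] for k in range(n_views)) for i in range(n_points)]
```

Note that each ${\bf U}_k$ and ${\bf V}_i$ is small and symmetric, while the ${\bf W}_{ki}$ couple one camera to one point.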
\begin{displaymath}
\left[\begin{array}{cc} {\bf U} & {\bf W} \\ {\bf W}^\top & {\bf V} \end{array}\right]
\left[\begin{array}{c} \Delta({\bf P}) \\ \Delta({\tt M}) \end{array}\right] =
\left[\begin{array}{c} \epsilon({\bf P}) \\ \epsilon({\tt M}) \end{array}\right]
\end{displaymath} (A10)

where the matrices ${\bf U},{\bf V},{\bf W}$ and the vectors $\Delta({\bf P}), \Delta({\tt M}), \epsilon({\bf P})$ and $\epsilon({\tt M})$ are composed of the blocks defined previously. Assuming ${\bf V}$ is invertible, both sides of equation A.10 can be multiplied on the left with
\begin{displaymath}
\left[\begin{array}{cc} {\bf I} & -{\bf W}{\bf V}^{-1} \\ {\bf0} & {\bf I} \end{array}\right]
\end{displaymath}
to obtain
\begin{displaymath}
\left[\begin{array}{cc} {\bf U}-{\bf W}{\bf V}^{-1}{\bf W}^\top & {\bf0} \\ {\bf W}^\top & {\bf V} \end{array}\right]
\left[\begin{array}{c} \Delta({\bf P}) \\ \Delta({\tt M}) \end{array}\right] =
\left[\begin{array}{c} \epsilon({\bf P})-{\bf W}{\bf V}^{-1}\epsilon({\tt M}) \\ \epsilon({\tt M}) \end{array}\right]
\end{displaymath} (A11)

This can be separated in two groups of equations. The first one is
\begin{displaymath}
\left( {\bf U}-{\bf W}{\bf V}^{-1}{\bf W}^\top \right) \Delta({\bf P}) =
\epsilon({\bf P}) - {\bf W}{\bf V}^{-1}\epsilon({\tt M})
\end{displaymath} (A12)

and can be used to solve for $\Delta({\bf P})$. The solution can then be back-substituted into the other group of equations:
\begin{displaymath}
\Delta({\tt M}) = {\bf V}^{-1}\left( \epsilon({\tt M}) - {\bf W}^\top \Delta({\bf P})\right)
\end{displaymath} (A13)

Note that, due to the block-diagonal structure of ${\bf V}$ (one small block ${\bf V}_i$ per point), its inverse can be computed very efficiently, block by block.
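The two-stage solve of equations A.12 and A.13 can be sketched as follows. For compactness the blocks are built here from a random dense Jacobian, so the result can be checked against an ordinary dense solve of the full normal equations; the dimensions are illustrative assumptions, and a real implementation would exploit the block-diagonal structure of ${\bf V}$ instead of forming a dense inverse.

```python
import numpy as np

# Build a small dense normal-equation system J^T J x = J^T r as a
# stand-in for the assembled blocks; sizes are arbitrary for the demo
# (e.g. 2 views x 11 camera parameters, 4 points x 3 coordinates).
rng = np.random.default_rng(1)
nP, nM = 22, 12
J = rng.standard_normal((60, nP + nM))
r = rng.standard_normal(60)
H, b = J.T @ J, J.T @ r

# Partition into the blocks of equation A.10.
U, W, V = H[:nP, :nP], H[:nP, nP:], H[nP:, nP:]
eps_P, eps_M = b[:nP], b[nP:]

# In a real bundle adjuster V is block-diagonal (one 3x3 block per
# point), so V^{-1} is computed per block; a dense inverse stands in.
WVinv = W @ np.linalg.inv(V)

# Equation A.12: reduced camera system, then A.13: back-substitution.
dP = np.linalg.solve(U - WVinv @ W.T, eps_P - WVinv @ eps_M)
dM = np.linalg.solve(V, eps_M - W.T @ dP)

# The staged solution matches solving the full system at once.
assert np.allclose(np.concatenate([dP, dM]), np.linalg.solve(H, b))
```

The reduced system in the first solve has only `nP` unknowns, which is where the computational saving described below comes from.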

The only computationally expensive step consists of solving equation A.12. This system is, however, much smaller than the original problem: for 20 views and 2000 points the problem is reduced from solving 6000 unknowns concurrently to more or less 200 unknowns. To simplify the notation, the plain normal equations were used in this presentation; it is however simple to extend this to the augmented normal equations of the Levenberg-Marquardt iteration.

Marc Pollefeys 2002-11-22