
Newton iteration

Newton's approach starts from an initial value ${\tt x}_0$ and refines this value using the assumption that $f$ is locally linear. A first order approximation of $f({\tt x}_0+\Delta)$ yields:
\begin{displaymath}
f({\tt x}_0+\Delta) \approx f({\tt x}_0) + {\bf J}\Delta
\end{displaymath} (A2)

with ${\bf J}$ the Jacobian matrix and $\Delta$ a small displacement. Under this assumption, minimizing the residual ${\tt\hat{e}} = {\tt\hat{e}}_0 - {\bf J}\Delta$ becomes a linear least-squares problem. Setting the derivative of $\Vert{\tt\hat{e}}\Vert^2$ with respect to $\Delta$ to zero yields
\begin{displaymath}
{\bf J}^\top{\bf J}\Delta = {\bf J}^\top {\tt\hat{e}}_0 \enspace .
\end{displaymath} (A3)
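As a sketch, the update $\Delta$ can be computed numerically either by solving the normal equation directly or, in a numerically more stable way, as a linear least-squares problem on ${\bf J}\Delta = {\tt\hat{e}}_0$; the Jacobian and residual values below are hypothetical stand-ins:

```python
import numpy as np

# Hypothetical Jacobian J and residual e0 at the current estimate.
J = np.array([[2.0, 0.0],
              [1.0, 3.0],
              [0.0, 1.0]])
e0 = np.array([1.0, 2.0, 3.0])

# Solve the normal equation J^T J Delta = J^T e0 for the update Delta.
delta = np.linalg.solve(J.T @ J, J.T @ e0)

# Equivalent but better conditioned: least squares on J Delta = e0,
# which avoids forming J^T J (whose condition number is squared).
delta_lstsq, *_ = np.linalg.lstsq(J, e0, rcond=None)
assert np.allclose(delta, delta_lstsq)
```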

This equation is known as the normal equation. The solution is found by starting from an initial estimate and refining it through successive iterations
\begin{displaymath}
{\tt x}_{i+1} = {\tt x}_i + \Delta_i
\end{displaymath} (A4)

with $\Delta_i$ the solution of the normal equation (A3) evaluated at ${\tt x}_i$. One hopes that this algorithm converges to the desired solution, but it may instead end up in a local minimum or fail to converge altogether. The outcome depends strongly on the initial value ${\tt x}_0$.
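The iteration above can be sketched as follows (a Gauss-Newton-style loop, since second derivatives of $f$ are ignored; the residual function and Jacobian used to exercise it are hypothetical examples, not from the text):

```python
import numpy as np

def gauss_newton(f, jac, x0, n_iter=20):
    """Iterate x_{i+1} = x_i + Delta_i, where Delta_i solves the normal
    equation J^T J Delta = J^T e0 with e0 = -f(x_i), i.e. the linearized
    step that drives the residual f toward zero."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        J = jac(x)
        e0 = -f(x)  # current residual to be cancelled by the step J Delta
        delta = np.linalg.solve(J.T @ J, J.T @ e0)
        x = x + delta
    return x

# Hypothetical residual f(x) = (x_0^2 - 2, x_1^2 - 3), zero at (sqrt(2), sqrt(3)).
f = lambda x: np.array([x[0]**2 - 2.0, x[1]**2 - 3.0])
jac = lambda x: np.diag([2.0 * x[0], 2.0 * x[1]])
x = gauss_newton(f, jac, x0=[1.0, 1.0])
```

With a poor starting point (e.g. near $x_0 = 0$, where the Jacobian is singular) the same loop can diverge, illustrating the dependence on the initial value noted above.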


Marc Pollefeys 2002-11-22