Least Squares Solutions by Orthogonalization



Orthogonalization Methods

*** Note: this is not the surveying article on this topic; that is located at http://www.cc.utah.edu/~nahaj/survey/. This is just the mention in the caving stuff.

Overview

[*** GROSSLY UNDER CONSTRUCTION ***]

Geek Note: This page is probably ONLY of interest to serious mathematics folk. People looking for an intuitive understanding are probably better off with the Normal Equations presentation.

The Least Squares problem (Ap=s, with weights), while usually presented (and solved) by surveyors by the method of Normal Equations, is normally solved by numerical analysts and other mathematicians by orthogonalization methods.

The usual argument against Normal Equations is that the condition number of the normal equation matrix is the square of the condition number of the original problem. This is not normally much of an issue in practice, since the matrix in the survey case is not only positive definite, but very close to being diagonally dominant. There are, however, other reasons that one might prefer orthogonalization methods, such as the lower overhead of such processing.

If you are a mathematics sort, but somehow haven't heard of orthogonalization methods, I'll review and say that an orthogonal transformation Q is a transformation with the property that inverse(Q) is equal to transpose(Q). The reason that they are of special interest in the least squares problem is that they don't change the 2-norm of a matrix, so any collection of orthogonal transformations preserves the 2-norm of the original problem.
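
For concreteness, here is a minimal numpy sketch of those two properties (the 2x2 rotation used as Q is just an arbitrary illustration, nothing from the survey problem):

   # An arbitrary rotation is orthogonal: transpose(Q) = inverse(Q),
   # and applying it preserves the 2-norm.
   import numpy as np

   theta = 0.3
   Q = np.array([[np.cos(theta), -np.sin(theta)],
                 [np.sin(theta),  np.cos(theta)]])

   assert np.allclose(Q.T, np.linalg.inv(Q))   # transpose(Q) = inverse(Q)

   v = np.array([3.0, 4.0])
   assert np.isclose(np.linalg.norm(Q @ v),    # 2-norm is unchanged
                     np.linalg.norm(v))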

Weighting the Problem

We will assume for this discussion that we are handed the problem matrix "A", the data (the shots "s"), and a weight matrix "W". Before we can use orthogonalization methods on the problem we have to weight it correctly. This is done straightforwardly for Normal Equation methods, but is a little more complex here.

The correct weighting of the problem Ap = s with a given weight W is to multiply both sides by a square root of the weight matrix. There are several definitions of square root that would work for this task; any non-singular matrix U such that transpose(U)*U = W does the job. Since the weight matrix is positive definite, we can be assured that such a U exists (the Cholesky factor of W is one natural choice).

Once the problem has been weighted (UAp=Us) we can proceed with the problem without worrying further about weights, since we now have a problem that has the same solution set.
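
As a sketch of the weighting step (the A, s, and W here are made up purely for illustration), using the Cholesky factor of W as the U above:

   # Weight the problem Ap = s by a square root U of the weight matrix W.
   import numpy as np
   from scipy.linalg import cholesky

   A = np.array([[1.0,  0.0],
                 [0.0,  1.0],
                 [1.0, -1.0]])            # hypothetical problem matrix
   s = np.array([2.0, 1.0, 0.9])          # hypothetical shot data
   W = np.diag([4.0, 1.0, 2.0])           # hypothetical positive definite weights

   U = cholesky(W)                        # upper triangular, transpose(U)*U = W
   assert np.allclose(U.T @ U, W)

   Aw, sw = U @ A, U @ s                  # the weighted problem U A p = U s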

QR's R is the Cholesky R

If we were to factor A in AX = L as A = QR, we would get the problem QRX = L. If we were to solve that system via normal equations we would get:

   transpose(QR)QRX = transpose(QR)L
     which expands to
   transpose(R)transpose(Q)QRX = transpose(R)transpose(Q)L
     But transpose(Q) = inverse(Q),
     so  transpose(Q)Q = the identity and the problem reduces to:
   transpose(R)RX = transpose(R)transpose(Q)L

Since R is already upper triangular, it is (up to the signs of the rows) already in the form that a Cholesky factorization of transpose(A)A would produce. Since the Cholesky factor is unique, the R from the QR factorization must be the Cholesky R. The signs of the rows can be adjusted after the fact, or they can be arranged to be correct by appropriate constraints on the spanning tree of the problem.
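
A quick numerical check of this claim, on a hypothetical full-rank matrix, with the row signs of R fixed to match Cholesky's positive-diagonal convention:

   # The R of a QR factorization equals the upper triangular Cholesky
   # factor of transpose(A)*A, once the row signs are adjusted.
   import numpy as np

   rng = np.random.default_rng(0)
   A = rng.standard_normal((6, 3))

   _, R = np.linalg.qr(A)               # R from the QR factorization
   C = np.linalg.cholesky(A.T @ A).T    # upper triangular Cholesky factor

   signs = np.sign(np.diag(R))          # flip rows so the diagonal is positive
   assert np.allclose(signs[:, None] * R, C)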

Solving the Problem

Conceptually, all orthogonalization methods are QR factorizations of the (weighted) problem. There are MANY different orthogonal transformations that could be used, but they have the nice property that they ALL produce the same result.
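
As a sketch of the overall procedure (the helper lstsq_qr and the small test problem are hypothetical, for illustration only), a QR-based least squares solve looks like this:

   # Solve min ||A p - s|| via a QR factorization of the weighted A.
   import numpy as np
   from scipy.linalg import qr, solve_triangular

   def lstsq_qr(A, s):
       Q, R = qr(A, mode='economic')         # A = QR, R upper triangular
       return solve_triangular(R, Q.T @ s)   # back-substitute R p = transpose(Q) s

   A = np.array([[1.0,  0.0],
                 [0.0,  1.0],
                 [1.0, -1.0]])               # hypothetical weighted problem
   s = np.array([2.0, 1.0, 0.9])
   p = lstsq_qr(A, s)
   assert np.allclose(p, np.linalg.lstsq(A, s, rcond=None)[0])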

One of the things that makes orthogonalization methods a very good choice for solving the survey problem is that the survey can always be arranged (by an appropriate and near-trivial relabeling of shots and points) into an upper triangular matrix, plus a row for each redundancy of the survey. Since the matrix is then almost in the form that R needs to be in the QR factorization, the amount of work needed to complete the task is low. Note that there is a linear time algorithm to do the necessary rearrangement.

The few rows describing the redundancy are VERY sparse (no more than two elements per row), so transformations that selectively zero elements (such as Householder reflections or Givens rotations) are an obvious choice.
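
Here is a sketch of the basic step: a single Givens rotation zeroing one element of a (hypothetical) redundancy row against the triangular part above it:

   # Zero A[2, 0] by rotating rows 0 and 2 against each other.
   import numpy as np

   def givens(a, b):
       # c, s such that [[c, s], [-s, c]] maps (a, b) to (r, 0)
       r = np.hypot(a, b)
       return (1.0, 0.0) if r == 0 else (a / r, b / r)

   A = np.array([[2.0,  1.0],
                 [0.0,  3.0],
                 [1.0, -1.0]])            # last row: a sparse redundancy row

   c, s = givens(A[0, 0], A[2, 0])
   G = np.array([[c, s], [-s, c]])
   A[[0, 2], :] = G @ A[[0, 2], :]        # rotate the two rows
   assert np.isclose(A[2, 0], 0.0)        # the element is now zero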

The most important reason that a numerical analyst would recommend orthogonalization methods is that the problem (as rearranged above) has a much lower condition number than the Normal Equations form. In fact, the Normal Equation methods square the condition number of the matrix by their setup. (The condition number of the matrix is low (but greater than one), so this doesn't usually cause too many problems when done by normal equations.)
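
The squaring is easy to check numerically (on an arbitrary full-rank matrix):

   # cond(transpose(A)*A) is the square of cond(A) in the 2-norm.
   import numpy as np

   rng = np.random.default_rng(1)
   A = rng.standard_normal((6, 3))

   assert np.isclose(np.linalg.cond(A.T @ A),
                     np.linalg.cond(A) ** 2)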

...[*** GROSSLY UNDER CONSTRUCTION ***]


This page is http://www.cc.utah.edu/~nahaj/cave/survey/intro/orthogonalization.html
© Copyright 2001 by John Halleck, All Rights Reserved.
This snapshot was last modified on September 19th, 2001