[*** UNDER CONSTRUCTION ***]

Non-geek note: Sorry, but this page requires a higher math tolerance than most of this paper. It is just a fact of life that you need to know about matrices to even read this page. I also assume you've read the rest of the paper.

There are any number of programs around that can solve a linear least squares problem efficiently. Some of them can handle problems bigger than memory, and can do wild and wonderful things that you'd never think of if you wrote your own.

It would be a shame, then, if everybody wanting to do cave survey had to write their own. Well, we are in luck. All the fancy machinery you've been reading about for how to do it can directly make use of the packages that are around, with a little preparation.

Partitioning:

Matrix partitioning allows you to turn the one-dimensional least squares problem into the three-dimensional one without trouble.

The method is probably better shown by example than by full formal derivation.

Assume we have a survey where we've set up the normal equations as:

[ N11 N12 N13 ] [ P1 ]   [ F1 ]
[ N21 N22 N23 ] [ P2 ] = [ F2 ]
[ N31 N32 N33 ] [ P3 ]   [ F3 ]

Where N11, N12, etc. are the (3x3) submatrices of the problem.

Of course N11, for example, could be a very ugly expression, but when it is evaluated it just becomes a 3x3 result. (We are only concerned with form here, rather than direct content.)

Let us assume that the elements of the submatrix N11 are

[ N11xx N11yx N11zx ]
[ N11xy N11yy N11zy ]
[ N11xz N11yz N11zz ]

If a given 3x3 submatrix NIJ is zero, then it is the 3x3 zero matrix, and if it has the value one, then it is the 3x3 identity matrix.

The normal equations we set up can (by matrix partitioning) be looked at as a matrix with each element being a 3x3 matrix. Or you can write the matrix out:

[ N11xx N11yx N11zx   N12xx N12yx N12zx   N13xx N13yx N13zx ]
[ N11xy N11yy N11zy   N12xy N12yy N12zy   N13xy N13yy N13zy ]
[ N11xz N11yz N11zz   N12xz N12yz N12zz   N13xz N13yz N13zz ]
[                                                           ]
[ N21xx N21yx N21zx   N22xx N22yx N22zx   N23xx N23yx N23zx ]
[ N21xy N21yy N21zy   N22xy N22yy N22zy   N23xy N23yy N23zy ]
[ N21xz N21yz N21zz   N22xz N22yz N22zz   N23xz N23yz N23zz ]
[                                                           ]
[ N31xx N31yx N31zx   N32xx N32yx N32zx   N33xx N33yx N33zx ]
[ N31xy N31yy N31zy   N32xy N32yy N32zy   N33xy N33yy N33zy ]
[ N31xz N31yz N31zz   N32xz N32yz N32zz   N33xz N33yz N33zz ]

The spacing is there just so that the original boundaries can be seen. It is just a nine by nine matrix. The formulas in this paper were carefully derived so that any of the matrix formulas that have weights, variances, coordinates, etc. can be expanded out in this manner.
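As a concrete sketch of this expansion (using NumPy purely as an assumed tool; the submatrix values here are random placeholders, since we only care about form), the partitioned layout above can be stitched together mechanically:

```python
import numpy as np

# A hypothetical 3-station example: nine 3x3 submatrices N[i][j].
# Random data stands in for the ugly expressions each NIJ evaluates to.
rng = np.random.default_rng(0)
blocks = [[rng.standard_normal((3, 3)) for j in range(3)] for i in range(3)]

# np.block writes the partitioned matrix out as one full 9x9 matrix,
# exactly as done by hand above.
N = np.block(blocks)
assert N.shape == (9, 9)

# Each 3x3 patch of N is the corresponding submatrix, e.g. N21:
assert np.array_equal(N[3:6, 0:3], blocks[1][0])
```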

Coordinates are column vectors, so that the F matrix would be written:

[ F1x ]
[ F1y ]
[ F1z ]
[     ]
[ F2x ]
[ F2y ]
[ F2z ]
[     ]
[ F3x ]
[ F3y ]
[ F3z ]

(Once again, the spacing is just for readability; it is just a column of nine numbers.)

Many of the programs available allow you to set up a problem **Ad=f** but
have no provision for weights. You are still in luck.

Weight matrices W are symmetric and positive definite. Therefore they can be factored into matrices G' and G such that G'G = W. (This is usually referred to as a "Choleski" factorization.) If you premultiply the problem Ad=f by G you get GAd = Gf. (The new A is GA and the new f is Gf.) The problem solved by the least squares program is now (GA)'GAd = (GA)'Gf, which reduces to A'G'GAd = A'G'Gf, and (since G'G is W) this is identical to A'WAd = A'Wf, which is exactly the weighted problem we wanted to solve.
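The trick can be checked numerically. This sketch (NumPy assumed; A, f, and W are random illustrative data) feeds a plain unweighted solver the premultiplied problem and compares it against solving the weighted normal equations directly:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((9, 6))   # hypothetical design matrix
f = rng.standard_normal(9)        # hypothetical observations
# A symmetric positive definite weight matrix (illustrative).
M = rng.standard_normal((9, 9))
W = M @ M.T + 9 * np.eye(9)

# Choleski-factor W = G'G.  NumPy returns W = L L', so take G = L'.
G = np.linalg.cholesky(W).T

# Hand the unweighted canned solver the premultiplied problem (GA)d = Gf.
d_canned, *_ = np.linalg.lstsq(G @ A, G @ f, rcond=None)

# Direct solution of the weighted normal equations A'WAd = A'Wf:
d_direct = np.linalg.solve(A.T @ W @ A, A.T @ W @ f)

assert np.allclose(d_canned, d_direct)
```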

Before you run off and implement a full-blown Choleski factorization routine, you should notice that the weight matrix is made of little 3x3 weight matrices down the diagonal. (Properly called a 3-wide block diagonal matrix.) Because of this, the Choleski factorization of the weight matrix is also going to be a block diagonal matrix, with each little 3x3 weight matrix replaced by the Choleski factorization of the corresponding little 3x3 weight matrix.
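That claim is easy to verify. In this sketch (NumPy assumed, illustrative random blocks), factoring each little 3x3 weight matrix separately gives exactly the factorization of the whole block diagonal matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
# Three little 3x3 SPD weight matrices (illustrative data).
Ws = []
for _ in range(3):
    m = rng.standard_normal((3, 3))
    Ws.append(m @ m.T + 3 * np.eye(3))

# The full 9x9 block diagonal weight matrix.
W = np.zeros((9, 9))
for k, wk in enumerate(Ws):
    W[3*k:3*k+3, 3*k:3*k+3] = wk

# Factoring the whole matrix at once...
L_full = np.linalg.cholesky(W)

# ...matches factoring each 3x3 block in place.
L_blocks = np.zeros((9, 9))
for k, wk in enumerate(Ws):
    L_blocks[3*k:3*k+3, 3*k:3*k+3] = np.linalg.cholesky(wk)

assert np.allclose(L_full, L_blocks)
```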

And before you hand the premultiplication to a full matrix multiply routine, you should note that the G matrix is block diagonal, so multiplication by it can be done very simply. Almost any book on matrix methods will show how to multiply by diagonal and block diagonal matrices very quickly and efficiently.
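The shortcut is simply that each diagonal block only touches its own three rows. A sketch (NumPy assumed, illustrative data) comparing the full multiply against the block-wise one:

```python
import numpy as np

rng = np.random.default_rng(3)
Gs = [rng.standard_normal((3, 3)) for _ in range(3)]  # the 3x3 diagonal blocks
A = rng.standard_normal((9, 5))                       # a hypothetical 9-row matrix

# The full 9x9 multiply, for comparison:
G = np.zeros((9, 9))
for k, gk in enumerate(Gs):
    G[3*k:3*k+3, 3*k:3*k+3] = gk
slow = G @ A

# Block-wise multiply: each 3x3 block hits only its own three rows of A,
# so the whole product costs three small 3x3-by-3xN multiplies.
fast = np.vstack([gk @ A[3*k:3*k+3] for k, gk in enumerate(Gs)])

assert np.allclose(slow, fast)
```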

Canned packages are generally handed the problem matrix and the observation matrix. They generally assume that this is the linear problem to be solved. If the documentation doesn't talk about non-linear problems, then the package will be treating the problem as linear. Dealing with non-linearity in this case is exactly the same as dealing with non-linearity in general. See the section on dealing with the survey problem non-linearity.
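The usual shape of that handling is to iterate: linearize at the current guess, hand the linearized problem to the canned solver, apply the correction, and repeat. A toy sketch (NumPy assumed; the residual function is hypothetical, not a survey formula):

```python
import numpy as np

# Solve a small nonlinear problem r(d) = 0 by repeatedly handing the
# *linearized* problem  J step = -r  to an unweighted canned solver.
def residual(d):   # hypothetical nonlinear residual, for illustration only
    return np.array([d[0]**2 + d[1] - 3.0, d[0] - d[1]])

def jacobian(d):   # its Jacobian, i.e. the linearization at the guess d
    return np.array([[2.0 * d[0], 1.0], [1.0, -1.0]])

d = np.array([1.0, 1.0])   # initial guess
for _ in range(20):
    step, *_ = np.linalg.lstsq(jacobian(d), -residual(d), rcond=None)
    d = d + step

assert np.allclose(residual(d), 0.0, atol=1e-8)
```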

This page is http://www.cc.utah.edu/~nahaj/cave/survey/intro/usecanned.html © Copyright 2000 by John Halleck, All Rights Reserved. This snapshot was last modified on January 24th, 2001