John Halleck's Least Squares Network Adjustment:


[*** GROSSLY UNDER CONSTRUCTION ***]

[*** This page currently has severe notation problems. ***] Inverse of A is sometimes A' and sometimes \A. (Yech... multiple sources of text.)

Introduction to Least Squares for Cave Surveying

"Least Squares Adjustment of Surveys" seems to be a magic phrase that your average cave surveyor assumes means "Mathematics beyond the understanding of mortal man".

There is no need for this view... While some fairly hefty mathematics goes into the derivation of the technique, the final result is quite easy to understand. In a practical sense the "methods of least squares" are just a way to set up a collection of weighted averages.
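To make the "weighted averages" view concrete, here is a minimal sketch (with made-up numbers) of the simplest possible case: two routes give two estimates of the same coordinate, and the least squares answer is just their average, weighted by the inverse of each route's variance.

```python
# Two independent routes to the same station give different estimates
# of its X coordinate.  The numbers are hypothetical; each weight is
# the inverse of that route's variance (1 / sigma^2).
routes = [
    (102.3, 1.0 / 0.5 ** 2),  # (estimated X, weight) from route 1
    (102.9, 1.0 / 1.0 ** 2),  # (estimated X, weight) from route 2
]

# The least squares estimate is just the weighted average.
total_weight = sum(w for _, w in routes)
x_hat = sum(x * w for x, w in routes) / total_weight

print(round(x_hat, 3))  # 102.42
```

Note that the answer lands closer to the better-measured route, which is exactly the behavior one wants.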

In this chapter I will explain some basic facts about Least Squares, and then walk through some simple examples of the common methods, and give some comments on "bottom up" methods. I will show why these methods all produce the same results, and show the geometric meanings of the results in terms of weighted averages.

I hasten to add that this is NOT the ONLY way to look at the geometric meaning of the results. There are many different geometric interpretations of Least Squares methods. I have merely chosen this interpretation because it is relatively simple and in a form that I personally believe has intuitive appeal.

Surveyor Geek note: The "cave survey problem" is actually the adjustment of a traverse net without triangulations or trilaterations. As such, it has a simpler form than a full survey might. Specifically, the geometry matrix has at most two non-zero items per row, and they can only be one or negative one. This can lead to great simplifications.


What are Least Squares Adjustments?

In a Cave Survey (or any real survey for that matter) one ends up with more than one way to get from one point in a survey to another. Unfortunately, the actual measurements in the survey give different locations for a point depending on which route you take. One would really like just one location for each point.

Least squares methods are statistical methods that can give the "statistically most likely" location of the multiply defined points. (Subject to the weights that the given covariances produce.)

Overview

We are (in this chapter) talking about solving a linear (or linearized) least squares problem that can be represented as a matrix.

There are two very important parts to any least squares adjustment. One is setting up a least squares system to solve, the other is solving it. This distinction is not as trivial as it sounds.

There are many different ways of setting up a least squares problem. The most common are called "Adjustment of Observations" and "Adjustment by Conditions". Assuming that one has done weighting correctly, these both produce the same answers (except for round off errors.) Some of these produce large, but mostly empty, problem matrices. Such matrices are called "sparse" matrices. Some of these produce small, but mostly full, problem matrices. A non-sparse matrix is called "full" or "dense".
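To make the sparse case concrete, here is a sketch (with a made-up three-shot net) of the geometry matrix that Adjustment of Observations builds. Each shot contributes one row, and each row has at most a +1 and a -1; shots touching a fixed station lose one of the two.

```python
# A tiny hypothetical net: fixed station A, unknown stations B and C,
# and shots A->B, B->C, A->C.  One dimension, for simplicity.
unknowns = ["B", "C"]                          # column order
shots = [("A", "B"), ("B", "C"), ("A", "C")]   # (from, to) per shot

rows = []
for frm, to in shots:
    row = [0, 0]
    if frm in unknowns:
        row[unknowns.index(frm)] = -1   # leaving a station: -1
    if to in unknowns:
        row[unknowns.index(to)] = 1     # arriving at a station: +1
    rows.append(row)

print(rows)  # [[1, 0], [-1, 1], [0, 1]]
```

Even in this toy case most entries are zero, and in a real cave with thousands of stations the fraction of zeros is overwhelming.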

Once a problem is set up, there are many ways to solve it. Except for round off errors, these methods produce the same results. Some of them are more appropriate for problem setups that are large and sparse, some are better at small dense systems. Some of them don't even require a full setup, since they can form the answer from the information needed to set up the problem, without ever actually building the problem matrix.
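As a sketch of that last point, the normal equations can be accumulated one shot at a time, without ever storing the full problem matrix. The numbers below are made up: a fixed station A at 0, unknown stations B and C, and three one-dimensional shots.

```python
# Fixed station A at 0; unknowns B and C (one dimensional for simplicity).
# Each shot: (from, to, observed difference, weight).  Hypothetical data.
shots = [("A", "B", 10.0, 1.0), ("B", "C", 5.0, 1.0), ("A", "C", 16.0, 1.0)]
unknowns = ["B", "C"]

# Accumulate the normal equations N x = t shot by shot.
N = [[0.0, 0.0], [0.0, 0.0]]
t = [0.0, 0.0]
for frm, to, obs, w in shots:
    row = [0.0, 0.0]
    if frm in unknowns:
        row[unknowns.index(frm)] = -1.0
    if to in unknowns:
        row[unknowns.index(to)] = 1.0
    for i in range(2):
        t[i] += w * row[i] * obs
        for j in range(2):
            N[i][j] += w * row[i] * row[j]

# Solve the resulting 2 by 2 system by Cramer's rule.
det = N[0][0] * N[1][1] - N[0][1] * N[1][0]
B = (t[0] * N[1][1] - t[1] * N[0][1]) / det
C = (N[0][0] * t[1] - N[1][0] * t[0]) / det

print(round(B, 4), round(C, 4))  # 10.3333 15.6667
```

Notice B comes out as the weighted average of the direct shot (10.0) and the route through C (16.0 - 5.0 = 11.0, at half the weight), which ties back to the weighted-average view above.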

Adjustments

We will start out explaining Adjustment of Observations, because it is simpler to explain, and doesn't require much preprocessing at all. As soon as we have worked up to surveys with loops, we will start to compare the two methods.

When we survey a cave we have a collection of shots that connect various actual locations in a cave.

Non-Geek note: Sorry, but this chapter seriously needs you to be somewhat familiar with matrices. I just don't know any way to do this that doesn't have that requirement. [There should be a refresher chapter to go here - JH]

In your average cave survey you measure the difference between the location of one survey point in the cave and another. This can be done by measuring Distance, Azimuth, and Inclination. In some very bizarre and unusual cases the points might instead have been measured directly with something like a GPS (Global Positioning System) unit. And divers would be incorporating depth measurements.

In all of these cases, through various techniques, one ends up with an X,Y,Z difference between the points, and some statistical information on the uncertainties of the shot. From the statistical properties one can compute a (three dimensional) weight for that same shot.
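Here is a sketch of that conversion for a single made-up shot. The azimuth-from-north convention and the per-axis variances are assumptions for the example, not a prescription.

```python
import math

# Convert one hypothetical shot (Distance, Azimuth, Inclination) into
# an X,Y,Z difference.  Azimuth is assumed measured from north (the Y
# axis), so dx uses sin and dy uses cos; inclination is from horizontal.
dist, az_deg, inc_deg = 12.0, 45.0, 30.0
az, inc = math.radians(az_deg), math.radians(inc_deg)

horiz = dist * math.cos(inc)     # horizontal component of the shot
dx = horiz * math.sin(az)
dy = horiz * math.cos(az)
dz = dist * math.sin(inc)

# Given an (assumed) error variance per axis, the diagonal weight for
# each axis is just the inverse of that variance.
var = (0.1, 0.1, 0.2)
weight = tuple(1.0 / v for v in var)   # (10.0, 10.0, 5.0)
```

In the general case the shot has a full 3 by 3 covariance matrix rather than three independent variances, and the weight is its matrix inverse.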

Geek note: By matrix partitioning, one can look at most matrix operations as direct operations on a big matrix, or one can partition the matrix into conforming pieces and take it as a matrix of smaller conforming pieces. So, for example, a 12 by 12 matrix can be taken as is, or it can be taken as a 4 by 4 matrix whose elements are little 3 by 3 matrices. I personally think that this can be used to make explanations easier, since one can explain a basically one dimensional case, and then jump to three dimensions while keeping the same notation. I've done this without comment in the Weighted Averages chapter and in the Proportional Distributions chapter, but feel a comment is in order here.
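The partitioned view is nothing more than consistent slicing, which a short sketch makes plain:

```python
# View a 12 by 12 matrix as a 4 by 4 matrix of 3 by 3 blocks.
n, b = 12, 3
M = [[i * n + j for j in range(n)] for i in range(n)]  # any 12x12 matrix

def block(M, I, J, b=3):
    """Return the (I, J) block of M as a b by b list of lists."""
    return [row[J * b:(J + 1) * b] for row in M[I * b:(I + 1) * b]]

print(block(M, 0, 0))  # [[0, 1, 2], [12, 13, 14], [24, 25, 26]]
```

Operations on the big matrix then carry over block by block, so a one-dimensional explanation lifts to three dimensions unchanged.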

There are several ways of looking at the problem and setting it up.

Adjustment of Original Observations

Collapsing Traverses

[... This section goes somewhere else ... ]

In the previous examples, we saw how a traverse that has no non-trivial junctions behaved in the least squares solution as if it were a single shot with appropriate weighting.

If you take the view that the least squares adjustment is just a weighted average of all of the (unique, non-trivial, non-redundant) paths between two points, then it is easy to justify this.

Any path is going to either include all of the traverse, or it is going to include none of it. The data value for the traverse is just going to be the sum of the data values [be careful of signs if there are shots in the traverse going in different directions]. The (co)variance of the combined traverse is just going to be the sum of the covariances of the shots that make it up. (If you really want to view things in terms of weights instead, then the weight of the traverse is going to be the inverse of the sum of the inverses of the weights of the shots.)

If a collection of shots ties two points together, then the collection can always be replaced by a virtual shot of appropriate value and weight.
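A sketch of collapsing a traverse, with made-up one-dimensional numbers:

```python
# Collapse a three-shot traverse into one virtual shot.
# Each shot: (X difference, variance).  Hypothetical numbers; the
# negative difference is a shot surveyed in the opposite direction.
shots = [(10.0, 0.04), (7.5, 0.09), (-2.5, 0.01)]

value = sum(dx for dx, _ in shots)     # data values add
variance = sum(v for _, v in shots)    # (co)variances add
weight = 1.0 / variance                # equivalently: the inverse of the
                                       # sum of the inverse shot weights

print(value, round(variance, 2))  # 15.0 0.14
```

The virtual shot (15.0, variance 0.14) can now stand in for the whole traverse anywhere in the adjustment.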

Distributing weights

If we tie down BOTH ends of a simple traverse we get an interesting result.

[... Example, showing shot oriented view ...]

Doesn't that look a lot like the chapter on distributing by weights?

[...]

The bottom line here is that if a traverse has its end points tied down, and none of its internal points are junction points, then the least squares final solution is going to be the same as a weighted distribution.
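A sketch of that weighted distribution, with made-up numbers: the misclosure against the fixed ends is spread over the shots in proportion to their variances, which is exactly what the least squares solution does in this case.

```python
# A traverse with both ends fixed.  Hypothetical numbers:
# the fixed span between the ends, and (measured dx, variance) per shot.
fixed_span = 30.0
shots = [(10.2, 0.04), (9.9, 0.01), (10.2, 0.04)]

misclosure = fixed_span - sum(dx for dx, _ in shots)   # about -0.3
total_var = sum(v for _, v in shots)

# Each shot absorbs a share of the misclosure proportional to its variance.
adjusted = [dx + misclosure * v / total_var for dx, v in shots]

print([round(a, 4) for a in adjusted])  # [10.0667, 9.8667, 10.0667]
```

The well-measured middle shot is disturbed least, and the adjusted shots sum exactly to the fixed span.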

Tradeoffs

[... This is now the wrong place for this discussion ...]

It is hard to get a feel for the tradeoffs in which method to use unless one has some actual numbers. The numbers depend on the caves themselves, but we can get some idea by looking at the numbers from an actual cave.

I will use the rough numbers for Little Brush Creek Cave in Utah, as they were some years ago. The cave had about 2500 survey shots, but there were only about 20 or so loops in the survey.

A least squares Adjustment of Observations is the most popular least squares method in cave survey programs. Such an adjustment would end up solving a matrix problem with a matrix about 2500 by 2500, with roughly 6,250,000 elements in it. However the rows of the (normal equation) matrix averaged only a little more than THREE non-zero elements each. The problem solved by orthogonalization methods (such as Givens rotations) would have very low overhead.

A least squares Adjustment by Conditions for the same data would form a problem matrix only about 20 by 20. That matrix would have been mostly filled in.

Most people have an intuition that a setup that only deals with a 20 x 20 problem matrix is better than one that has to deal with a 2500 x 2500 problem matrix. Whether or not the larger (almost empty) matrix is "easier" to solve depends on which tools one has around.

Another factor that comes into play is that the techniques that produce the smallest problem matrix need to know the most about the geometry of the survey. Adjustment by Conditions needs to know what shots form loops. Adjustment of Observations only needs to know what surveys are "floating".




This page is http://www.cc.utah.edu/~nahaj/cave/survey/intro/leastsquares.html
© Copyright 2000 by John Halleck, All Rights Reserved.
This snapshot was last modified on January 24th, 2001