Ranjan S. Muttiah
Blackland Res. Ctr.
Texas A&M Univ.
808 E. Blackland Rd.
Temple, TX 76502
muttiah@iiml.tamu.edu

Bruce W. Byars
Geology Department
Baylor Univ.
Waco, TX 77777
macgyver@earthlogic.baylor.edu

INTRODUCTION

Neural networks have found many interesting uses in the remote sensing community because they allow the integration of remote sensing and other complementary landuse information in image classification (Zhuang et al., 1992). In this paper, we describe a neural network tool developed in GRASS (U.S. Army Corps, 1993) to aid in the classification of GIS data. We apply the tool to an example classification of an AVHRR image of Temple, Texas, into urban and non-urban areas. Temple is located in the Blackland prairies of Central Texas.

GRASS NEURAL NETWORK TOOL

A neural network tool was written for the GRASS GIS in the C programming language (Kernighan and Ritchie, 1984) to facilitate the use of neural networks and linear classifiers in supervised classification of raster cell files. Neural networks are made of simple non-linear computational units, called neurons, that are linked together and work cooperatively to solve complex mapping problems. Readers wishing to learn about neural networks in depth are referred to the book by Hertz et al. (1990). In a GIS framework, each input unit of the neural network is assigned a raster map layer, and training data for the network are collected on a cell-by-cell basis. Typically, a single map layer is used for selecting training sites, although this requirement can be relaxed: output units can be assigned to more than one map layer, and the map layer used to select training sites need not be used as an output of the neural network.
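
As an illustrative sketch only (not code from the tool itself; the array sizes and names below are hypothetical), the following C fragment shows the kind of computation performed by a single sigmoid unit whose inputs are cell values drawn from the input map layers:

    #include <math.h>
    #include <stdio.h>

    /* Output of one sigmoid unit: y = 1 / (1 + exp(-(w.x + bias))) */
    static double neuron_output(const double *x, const double *w,
                                double bias, int n)
    {
        double net = bias;
        for (int i = 0; i < n; i++)
            net += w[i] * x[i];
        return 1.0 / (1.0 + exp(-net));
    }

    int main(void)
    {
        /* cell values sampled from three hypothetical input map layers */
        double cell[3] = { 0.42, 0.13, 0.77 };
        double w[3]    = { 1.50, -0.80, 0.30 };

        printf("unit output = %f\n", neuron_output(cell, w, 0.1, 3));
        return 0;
    }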

Since a maximum likelihood classifier (i.maxlik) already exists in GRASS, many of the utilities of i.maxlik for selecting and analyzing training data were used in the neural network tool. Among these utilities is the ability to visualize and, if necessary, change the histograms from each training site. The program for the neural network tool was structured in such a way that training classes selected in the neural network tool can also be used with the maximum likelihood classifier. In GRASS, the maximum likelihood classifier assumes a Gaussian distribution for the training data.
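
For reference, the Gaussian assumption amounts to scoring each cell against the per-class mean and variance estimated from the training sites and assigning the cell to the class with the largest likelihood. The following univariate C sketch (hypothetical names; i.maxlik itself operates on multiband signatures) illustrates the idea:

    #include <math.h>

    #ifndef M_PI
    #define M_PI 3.14159265358979323846
    #endif

    /* Log-likelihood of cell value v under a univariate Gaussian class model. */
    static double gaussian_loglik(double v, double mean, double var)
    {
        double d = v - mean;
        return -0.5 * (log(2.0 * M_PI * var) + d * d / var);
    }

    /* Assign the cell to the class with the largest log-likelihood. */
    static int classify_cell(double v, const double *means,
                             const double *vars, int nclasses)
    {
        int best = 0;
        double best_ll = gaussian_loglik(v, means[0], vars[0]);
        for (int c = 1; c < nclasses; c++) {
            double ll = gaussian_loglik(v, means[c], vars[c]);
            if (ll > best_ll) {
                best_ll = ll;
                best = c;
            }
        }
        return best;
    }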

Figure 1 shows the initial screen of the tool once the user has entered the name of the output map layer, the number of output classes, and the input map layers.

Once the user is satisfied with all the training sites that he has selected, all input map layers are sampled for their data. Training data for the neural network are gathered at the intersections of the training areas with the input map layers. The training data are stored in an ASCII file so that the user may examine and change them if necessary. Input data to the network are obtained cell-wise from all areas of the input maps. The classes option of the neural network tool lets a user examine the distribution of the data when two input map layers are used; for higher input dimensions, it will be necessary to link the tool to a more sophisticated program such as xgobi (Buta et al., 1986). The user may eliminate outliers and data conflicts by drawing rectangular boxes around any data points he wishes to eliminate. If necessary, a whitening and diagonalization operation can be applied to the data so that better class separability is achieved.
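
The whitening and diagonalization step can be illustrated for the two-band case as follows. This is only a sketch with hypothetical names, not the tool's implementation, and it assumes both eigenvalues of the covariance matrix are strictly positive:

    #include <math.h>

    /* Whiten two-band training data in place: subtract the mean, rotate onto
     * the eigenvectors of the 2x2 sample covariance matrix, and scale each
     * axis by 1/sqrt(eigenvalue) so the transformed bands are uncorrelated
     * with unit variance. */
    void whiten2(double *x, double *y, int n)
    {
        double mx = 0.0, my = 0.0;
        for (int i = 0; i < n; i++) { mx += x[i]; my += y[i]; }
        mx /= n;  my /= n;

        /* sample covariance matrix [[sxx, sxy], [sxy, syy]] */
        double sxx = 0.0, sxy = 0.0, syy = 0.0;
        for (int i = 0; i < n; i++) {
            double dx = x[i] - mx, dy = y[i] - my;
            sxx += dx * dx;  sxy += dx * dy;  syy += dy * dy;
        }
        sxx /= n - 1;  sxy /= n - 1;  syy /= n - 1;

        /* eigen-decomposition of the symmetric 2x2 matrix */
        double tr = sxx + syy, det = sxx * syy - sxy * sxy;
        double disc = sqrt(tr * tr / 4.0 - det);
        double l1 = tr / 2.0 + disc, l2 = tr / 2.0 - disc;
        double theta = 0.5 * atan2(2.0 * sxy, sxx - syy);
        double ct = cos(theta), st = sin(theta);

        for (int i = 0; i < n; i++) {
            double dx = x[i] - mx, dy = y[i] - my;
            double u =  ct * dx + st * dy;   /* rotate onto eigenvectors */
            double v = -st * dx + ct * dy;
            x[i] = u / sqrt(l1);             /* scale to unit variance */
            y[i] = v / sqrt(l2);
        }
    }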

Once the user is satisfied with the class distributions, he selects the "configure" option. Here the user selects either a quickpropagation network (Fahlman, 1991) or the traditional backpropagation network (Baffes, 1990). The quickpropagation network uses gradient descent to adjust the weights and assumes a parabolic shape for the error surface near its minimum. The network iterates for the number of training cycles set by the user. Backpropagation uses gradient descent and converges to a root mean square error value set by the user. In the neural network tool, the performance of the network as training progresses is shown on the left half of the GRASS screen. Once training of the neural network is complete, the user propagates the cell values of the input map layers through the network. The new map layer generated by the neural network can then be queried. Upon completion of training, the user may save the neural network structure (the number of input, hidden, and output units) and the network weights.
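
The following C sketch (hypothetical layer sizes and learning rate, bias terms omitted, not the tool's own code) shows one gradient-descent weight update of the kind performed by a backpropagation network with a single hidden layer:

    #include <math.h>

    #define NIN   2   /* input map layers            */
    #define NHID  4   /* hidden units (hypothetical) */

    static double sigmoid(double z) { return 1.0 / (1.0 + exp(-z)); }

    /* One gradient-descent step on a 2-input, NHID-hidden, 1-output network.
     * x: cell values from the input map layers, t: target class (0 or 1),
     * eta: learning rate.  Returns the squared output error before the update. */
    double backprop_step(const double x[NIN], double t, double eta,
                         double w_ih[NHID][NIN], double w_ho[NHID])
    {
        double h[NHID], y = 0.0;

        /* forward pass */
        for (int j = 0; j < NHID; j++) {
            double net = 0.0;
            for (int i = 0; i < NIN; i++)
                net += w_ih[j][i] * x[i];
            h[j] = sigmoid(net);
            y += w_ho[j] * h[j];
        }
        y = sigmoid(y);

        /* backward pass: delta terms for the sigmoid units */
        double dy = (t - y) * y * (1.0 - y);
        for (int j = 0; j < NHID; j++) {
            double dh = dy * w_ho[j] * h[j] * (1.0 - h[j]);
            w_ho[j] += eta * dy * h[j];
            for (int i = 0; i < NIN; i++)
                w_ih[j][i] += eta * dh * x[i];
        }
        return (t - y) * (t - y);
    }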

The "linear" option lets a user classify input map layers based on a nearest means and Bayesian classifiers. The nearest means classifier calculates the mean vector of each training class, and classifies input vectors according to distance from mean vectors. In a Bayesian classifier, input data is classified such as to minimize the overlap error between training classes.

To illustrate the uses of the GRASS neural network tool that we have developed, we consider a simple two-class problem of classifying an AVHRR image into urban and non-urban areas, using a TM composite to select the training areas. This exercise was also done with an eye toward using TM composites (whole TM scenes) from different parts of the country as training sites for classifying AVHRR into landuse classes.

EXAMPLE APPLICATION

A Thematic Mapper (TM) composite of Temple, Texas, made from the second, third, and seventh channels, was used to identify landuse categories. The ERDAS landuse categories corresponded to those predicted from the TM composite, except for the water body in the northwest corner of the image, which had a smaller coverage than in the TM image.

Figure x shows a black and white image of the class data distributions for the urban and non-urban areas (the classes are evident when using the tool, since they are drawn in different colors). Figure x shows the error at the output units of the neural network at each training cycle. Figure x shows the classified AVHRR image, with the darker areas representing urban areas and the lighter areas representing non-urban areas.

ACKNOWLEDGEMENT

We thank Jerry Ledyard (SCS, Temple, Texas) for help in the GPS survey.

REFERENCES