Since a maximum likelihood classifier (i.maxlik) already exists in GRASS, many of its utilities for selecting and analyzing training data were reused in the neural network tool. Among these utilities is the ability to visualize and, if necessary, modify the histograms of each training site. The neural network tool was structured in such a way that training classes selected in it can also be used by the maximum likelihood classifier. In GRASS, the maximum likelihood classifier assumes a Gaussian distribution for the training data.
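The Gaussian assumption can be made concrete with a minimal sketch of a Gaussian maximum-likelihood decision rule, estimated from per-class sample means and covariances. This is a generic illustration, not the i.maxlik implementation; the function and variable names are ours:

```python
import numpy as np

def gaussian_ml_classify(train_X, train_y, X):
    """Assign each row of X to the class with the highest Gaussian
    log-likelihood, estimated from per-class training statistics.
    (Illustrative sketch; not the i.maxlik code.)"""
    classes = np.unique(train_y)
    scores = []
    for c in classes:
        Xc = train_X[train_y == c]
        mu = Xc.mean(axis=0)                    # class mean vector
        cov = np.cov(Xc, rowvar=False)          # class covariance matrix
        inv = np.linalg.inv(cov)
        logdet = np.linalg.slogdet(cov)[1]
        diff = X - mu
        # squared Mahalanobis distance of every input vector to this class
        maha = np.einsum('ij,jk,ik->i', diff, inv, diff)
        # log of the Gaussian density, constant term dropped
        scores.append(-0.5 * (logdet + maha))
    return classes[np.argmax(scores, axis=0)]
```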
Figure 1 shows the initial screen of the tool once the user has entered the name of the output map layer, the number of output classes, and the names of the input map layers.
Once the user is satisfied with all of the selected training sites, the input map layers are sampled. Training data for the neural network are gathered at the intersections of the training areas with the input map layers. The training data are stored as an ASCII file so that the user may examine and change them if necessary. Input data to the network are obtained cell-wise from all areas of the input maps. The classes option of the neural network tool lets a user examine the distribution of the data when two input map layers are used; for higher input dimensions, it will be necessary to link the tool to a more sophisticated program such as xgobi (Buta et al., 1986). The user may eliminate outliers and data conflicts by drawing rectangular boxes around any data points to be removed. If necessary, a whitening and diagonalization operation can be applied to the data to achieve better class separability.
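The whitening and diagonalization step can be sketched as follows, under the usual definition of whitening (diagonalize the sample covariance and rescale each principal axis to unit variance). The helper name is ours, not the tool's:

```python
import numpy as np

def whiten(X):
    """Decorrelate the training samples and scale each principal
    axis to unit variance, so the covariance becomes the identity.
    (Generic whitening sketch; not the tool's actual routine.)"""
    Xc = X - X.mean(axis=0)                    # center the data
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    W = eigvecs / np.sqrt(eigvals)             # whitening matrix E * Lambda^(-1/2)
    return Xc @ W
```

After this transform, Euclidean distances between class samples reflect statistical separation rather than the raw scale of each input band, which is why separability improves.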
Once the user is satisfied with the class distributions, he selects the "configure" option. Here the user chooses either a quickpropagation network (Fahlman, 1991) or the traditional backpropagation network (Baffes, 1990). The quickpropagation network adjusts weights by gradient descent and assumes a locally parabolic error surface when stepping toward the minimum. The network iterates for the number of training cycles set by the user. Backpropagation uses gradient descent and converges to a root-mean-square error value set by the user. In the neural network tool, the performance of the network as training progresses is shown on the left half of the GRASS screen. Once training of the neural network is complete, the user propagates the cell values of the input map layers through the network. The new map layer generated by the neural network can then be queried. Upon completion of network training, the user may save the neural network structure (the number of input, hidden, and output units) along with the network weights.
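The quickpropagation update can be sketched per weight as follows. It fits a parabola through the two most recent gradient measurements and jumps toward that parabola's minimum, falling back to a plain gradient-descent step when no previous step exists. The parameter names (lr, mu) and the growth cap follow Fahlman's description, but this is an illustrative sketch, not the tool's code:

```python
def quickprop_step(grad, prev_grad, prev_step, lr=0.1, mu=1.75):
    """One quickprop update for a single weight: grad and prev_grad
    are dE/dw on the current and previous training cycle, prev_step
    is the last weight change. (Sketch after Fahlman, 1991.)"""
    if prev_step != 0.0 and prev_grad != grad:
        # minimum of the parabola through the two gradient measurements
        step = prev_step * grad / (prev_grad - grad)
        # cap growth so successive steps cannot explode
        if abs(step) > mu * abs(prev_step):
            step = mu * abs(prev_step) * (1.0 if step > 0 else -1.0)
    else:
        step = -lr * grad                      # plain gradient-descent step
    return step
```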
The "linear" option lets a user classify input map layers with nearest-mean and Bayesian classifiers. The nearest-mean classifier calculates the mean vector of each training class and assigns input vectors to classes according to their distance from these mean vectors. The Bayesian classifier assigns input data so as to minimize the overlap error between training classes.
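A nearest-mean classifier of this kind can be sketched in a few lines; the function name and array layout are our assumptions, not the tool's interface:

```python
import numpy as np

def nearest_mean_classify(train_X, train_y, X):
    """Assign each input vector to the class whose training-mean
    vector is closest in Euclidean distance. (Illustrative sketch.)"""
    classes = np.unique(train_y)
    means = np.array([train_X[train_y == c].mean(axis=0) for c in classes])
    # distance from every input vector to every class mean
    d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]
```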
To illustrate the uses of the GRASS neural network tool that we have developed, we consider a simple two-class problem: classifying AVHRR data into urban and non-urban areas, using a TM composite to select training areas. This exercise was also done with an eye toward using TM composites (covering whole TM scenes) in different parts of the country as training sites for classifying AVHRR data into land-use classes.
Figure x shows a black and white image of the class data distributions for urban and non-urban areas (in actual use of the tool the classes are evident, since they are drawn in different colors). Figure x shows the error at the output units of the neural network on each training cycle. Figure x shows the classified AVHRR image, with the darker areas representing urban areas and the lighter areas representing non-urban areas.