Doctoral Degrees (Mathematical Sciences)
- Item: Sensitivity analysis of multilayer neural networks (Stellenbosch : Stellenbosch University, 1999-12). Engelbrecht, Andries Petrus; Cloete, I.; Stellenbosch University. Faculty of Science. Dept. of Mathematical Sciences.

ENGLISH ABSTRACT: The application of artificial neural networks to solve classification and function approximation problems is no longer an art. Using a neural network does not simply mean presenting a data set to the network and relying on the so-called "black box" to produce hopefully accurate results. Rigorous mathematical analysis now provides a much better understanding of what is going on inside the "black box". The knowledge gained from these mathematical studies allows the development of specialized tools to increase performance, robustness and efficiency. This thesis proposes that sensitivity analysis of the neural network output function be used to learn more about the inner workings of multilayer feedforward neural networks. New sensitivity analysis techniques are developed to probe the knowledge embedded in the weights of networks, and to use this knowledge within specialized sensitivity analysis algorithms to improve generalization performance, to reduce learning and model complexity, and to improve convergence performance. A general mathematical model is developed which uses first-order derivatives of the neural network output function with respect to the network parameters to quantify the effect that small perturbations to these parameters have on the output of the network. This sensitivity analysis model is then used to develop techniques to locate and visualize decision boundaries, and to determine which boundaries are implemented by which hidden units. The decision boundary detection algorithm is then used to develop an active learning algorithm for classification problems which trains only on patterns close to decision boundaries.
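The first-order sensitivity model described above can be sketched in a few lines. This is an illustrative example only, assuming a one-hidden-layer network with tanh hidden units and a sigmoid output; the names `W1`, `W2` and `output_sensitivity` are hypothetical and not the thesis's notation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def output_sensitivity(x, W1, W2):
    """Return dy/dx for y = sigmoid(W2 @ tanh(W1 @ x)).

    A small sensitivity vector component means the output barely reacts to
    perturbations of that input; a large one flags a pattern near a
    decision boundary.
    """
    z1 = W1 @ x            # hidden pre-activations
    h = np.tanh(z1)        # hidden activations
    y = sigmoid(W2 @ h)    # network output
    # Chain rule: dy/dx = y(1-y) * W2 @ diag(1 - h^2) @ W1
    return (y * (1.0 - y)) * (W2 * (1.0 - h**2)) @ W1

rng = np.random.default_rng(0)
W1 = rng.standard_normal((3, 2))   # illustrative random weights
W2 = rng.standard_normal((1, 3))
x = np.array([0.5, -0.2])
print(output_sensitivity(x, W1, W2))  # gradient of the output w.r.t. each input
```

The analytic gradient avoids perturbing each input separately, and the same chain-rule expression extends to derivatives with respect to the weights themselves.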
Patterns that convey little information about the position of the boundaries are therefore not used for training. An incremental learning algorithm for function approximation problems is also developed to incrementally grow the training set from a candidate set, adding those patterns that convey the most information about the function to be approximated. The sensitivity of the network output to small perturbations of the input pattern is used as a measure of pattern informativeness. Sensitivity analysis is also used to develop a network pruning algorithm to remove irrelevant network parameters. The significance of a parameter is quantified as the influence that small perturbations of that parameter have on the network output. Variance analysis is employed as a pruning heuristic to decide whether a parameter should be removed. Extensive experimental evidence is provided to illustrate how each of the developed sensitivity analysis techniques addresses the objectives of improved performance, robustness and efficiency. These results show that the different models successfully utilize the neural network learner's current knowledge to obtain optimal architectures and to make optimal use of the available training data.
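A pruning heuristic of this flavour can be sketched as follows. The finite-difference sensitivities, the mean/variance test, and the threshold below are illustrative assumptions, not the exact statistics used in the thesis:

```python
import numpy as np

def forward(X, W1, W2):
    """One-hidden-layer network applied to a batch of patterns X."""
    return np.tanh(X @ W1.T) @ W2.T

def weight_sensitivities(X, W1, W2, eps=1e-4):
    """Central finite-difference d(output)/d(w) for each entry of W1, per pattern."""
    S = np.zeros((X.shape[0],) + W1.shape)
    for i in range(W1.shape[0]):
        for j in range(W1.shape[1]):
            Wp = W1.copy(); Wp[i, j] += eps
            Wm = W1.copy(); Wm[i, j] -= eps
            S[:, i, j] = ((forward(X, Wp, W2) - forward(X, Wm, W2)) / (2 * eps)).ravel()
    return S

def prune_mask(S, tol=1e-2):
    """Keep a weight only if its sensitivity is non-negligible on average or varies
    across patterns; weights failing both tests are candidates for removal."""
    mean = np.abs(S).mean(axis=0)
    var = S.var(axis=0)
    return (mean > tol) | (var > tol**2)   # True = keep, False = prune

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 4))           # illustrative candidate pattern set
W1 = rng.standard_normal((3, 4))
W2 = rng.standard_normal((1, 3))
W2[0, 0] = 0.0   # hidden unit 0 feeds the output with zero weight,
                 # so every incoming weight of that unit is irrelevant
print(prune_mask(weight_sensitivities(X, W1, W2)))
```

In this sketch the incoming weights of the dead hidden unit show zero sensitivity on every pattern and are flagged for pruning, while weights that influence the output survive the test.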