Doctoral Degrees (Computer Science)
- On noise regularised neural networks: initialisation, learning and inference (Stellenbosch : Stellenbosch University, 2019-12) Pretorius, Arnu; Kroon, R. S. (Steve); Kamper, M. J.; Stellenbosch University. Faculty of Science. Dept. of Mathematical Sciences. Division Computer Science.

ENGLISH ABSTRACT: Innovation in regularisation techniques for deep neural networks has been a key factor in the rising success of deep learning. However, the development of these techniques often has limited guidance from theory, and our understanding of how various successful regularisation techniques function remains incomplete. In this work, we seek to contribute to an improved understanding of regularisation in deep learning. We focus specifically on a particular approach to regularisation that injects noise into a neural network; a widely used example of such a technique is dropout (Srivastava et al., 2014). Our contributions to noise regularisation span three key areas of modelling: (1) learning, (2) initialisation and (3) inference. We first analyse the learning dynamics of a simple class of shallow noise regularised neural networks called denoising autoencoders (DAEs) (Vincent et al., 2008), to gain an improved understanding of how noise affects the learning process. In this first part, we observe a dependence of learning behaviour on initialisation, which leads us to study how noise interacts with the initialisation of a deep neural network in terms of signal propagation dynamics during the forward and backward pass. Finally, we consider how noise affects inference in a Bayesian context. Throughout this study, we focus mainly on fully-connected feedforward neural networks with rectified linear unit (ReLU) activation functions.
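To make the noise-injection idea concrete, here is a minimal NumPy sketch of inverted dropout (the common training-time variant of the dropout technique cited above); it is an illustration, not code from the thesis:

```python
import numpy as np

def dropout(x, p, rng):
    """Inverted dropout: drop each unit with probability p and rescale
    the survivors by 1/(1 - p) so the expected activation is unchanged."""
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

rng = np.random.default_rng(0)
out = dropout(np.ones(1000), p=0.5, rng=rng)  # entries are either 0 or 2
```

Because the rescaling keeps the expectation equal to the input, no separate rescaling is needed at test time, when the noise is switched off.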
To analyse the learning dynamics of DAEs, we derive closed-form solutions to a system of decoupled differential equations that describe the change in scalar weights during the course of training as they approach the eigenvalues of the input covariance matrix (under a convenient change of basis). In terms of initialisation, we use mean field theory to approximate the distribution of the pre-activations of individual neurons, and use this to derive recursive equations that characterise the signal propagation behaviour of the noise regularised network during the first forward and backward pass of training. Using these equations, we derive new initialisation schemes for noise regularised neural networks that ensure stable signal propagation. Since this analysis is only valid at initialisation, we next conduct a large-scale controlled experiment, training thousands of networks under a theoretically guided experimental design, to further test the effects of initialisation on training speed and generalisation. To shed light on the influence of noise on inference, we develop a connection between randomly initialised deep noise regularised neural networks and Gaussian processes (GPs), non-parametric models that perform exact Bayesian inference, and establish new connections between a particular initialisation of such a network and the behaviour of its corresponding GP. Our work ends with an application of signal propagation theory to approximate Bayesian inference in deep learning, where we develop a new technique that uses self-stabilising priors for training deep Bayesian neural networks (BNNs). Our core findings are as follows: noise regularisation helps a model to focus on the more prominent statistical regularities in the training data distribution during learning, which should be useful for later generalisation. However, if the network is deep and not properly initialised, noise can push network signal propagation dynamics into regimes of poor stability.
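The interaction between noise and initialisation can be illustrated with a small simulation. In the standard mean-field picture, a ReLU layer followed by inverted dropout multiplies the pre-activation second moment by roughly σ_w²/(2(1 − p)), so keeping signals stable suggests shrinking the weight variance by a factor of (1 − p) relative to the usual He scaling of 2/fan-in. The sketch below assumes that setup and is illustrative only; the specific schemes derived in the thesis may differ in detail:

```python
import numpy as np

def depth_moments(sigma_w2, p_drop, depth=50, width=500, seed=0):
    """Push one random input through a deep ReLU net with inverted dropout,
    recording the mean squared pre-activation at every layer."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(width)
    moments = []
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * np.sqrt(sigma_w2 / width)
        h = W @ x
        moments.append(np.mean(h ** 2))           # pre-activation second moment
        x = np.maximum(h, 0.0)                    # ReLU
        mask = rng.random(width) >= p_drop        # inject dropout noise
        x = x * mask / (1.0 - p_drop)
    return moments

p = 0.3
he = depth_moments(2.0, p)               # standard He scaling: signal explodes
aware = depth_moments(2.0 * (1 - p), p)  # noise-adjusted scaling: signal stays stable
```

With plain He scaling each layer amplifies the second moment by 1/(1 − p), which compounds exponentially with depth; the noise-adjusted variance cancels that factor and keeps the per-layer moment roughly constant.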
We correct this behaviour with proper “noise-aware” weight initialisation. Even so, noise also limits the depth to which networks are able to train successfully, and networks that do not exceed this depth limit demonstrate a surprising insensitivity to initialisation with regard to training speed and generalisation. In terms of inference, noisy neural network GPs perform best when their kernel parameters correspond to the new initialisation derived for noise regularised networks, and increasing the amount of injected noise leads to more constrained (simpler) models with larger uncertainty (away from the training data). Lastly, we find that our new technique using self-stabilising priors makes training deep BNNs more robust and improves performance compared to other state-of-the-art approaches.
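For context on the GP connection, the kernel of a wide random ReLU network follows a layer-wise recursion that, without injected noise, takes the well-known arc-cosine form. The following is a hedged baseline sketch of that noise-free recursion, not the noisy-kernel construction developed in the thesis:

```python
import numpy as np

def nngp_relu(K, sigma_w2, depth):
    """Iterate the layer-wise NNGP kernel recursion for a deep ReLU network.
    K is an (n, n) input Gram matrix; returns the kernel after `depth` layers."""
    for _ in range(depth):
        d = np.sqrt(np.diag(K))
        cos_t = np.clip(K / np.outer(d, d), -1.0, 1.0)
        theta = np.arccos(cos_t)
        # Arc-cosine (degree-1) kernel for the ReLU non-linearity.
        K = sigma_w2 / (2 * np.pi) * np.outer(d, d) * (
            np.sin(theta) + (np.pi - theta) * cos_t)
    return K

K0 = np.array([[1.0, 0.5],
               [0.5, 1.0]])
K10 = nngp_relu(K0, sigma_w2=2.0, depth=10)
```

With σ_w² = 2 the kernel diagonal is preserved from layer to layer, mirroring He-style critical initialisation, while off-diagonal correlations drift towards one with depth; this is the sense in which kernel parameters and initialisation correspond.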