Training Deep Models
In the early years, methods for training multilayer networks were not known. In their influential book, Minsky and Papert argued strongly against the prospects of neural networks because of the inability to train multilayer networks. Therefore, neural networks stayed out of favor as a general area of research until the eighties. The first significant breakthrough in this respect was the backpropagation algorithm proposed by Rumelhart et al. The proposal of this algorithm rekindled an interest in neural networks. However, several computational, stability, and overfitting challenges were found in the use of this algorithm. As a result, research in the field of neural networks again fell from favor.
At the turn of the century, several advances again brought popularity to neural networks. Not all of these advances were algorithm-centric. For example, increased data availability and computational power have played the primary role in this resurrection. However, some changes to the basic backpropagation algorithm and clever methods for initialization, such as pretraining, have also helped. It has also become easier in recent years to perform the intensive experimentation required for making algorithmic adjustments due to the reduced testing cycle times (caused by improved computational hardware). Therefore, increased data, computational power, and reduced experimentation time (for algorithmic tweaking) went hand-in-hand.
A point to note is that neural network optimization is a multivariable optimization problem. These variables correspond to the weights of the connections in various layers. Multivariable optimization problems often face stability challenges because one must perform the steps along each direction in the “right” proportion. This turns out to be particularly hard in the neural network domain, and the effect of a gradient-descent step might be somewhat unpredictable. One issue is that a gradient only provides a rate of change over an infinitesimal horizon in each direction, whereas an actual step has a finite length. One needs to choose steps of reasonable size in order to make any real progress in optimization. The problem is that the gradients do change over a step of finite length, and in some cases they change drastically. The complex optimization surfaces presented by neural network optimization are particularly treacherous in this respect, and the problem is exacerbated with poorly chosen settings (such as the initialization point or the normalization of the input features). As a result, the (easily computable) steepest-descent direction is often not the best direction to use for retaining the ability to use large steps. Small step sizes lead to slow progress, whereas the optimization surface might change in unpredictable ways with the use of large step sizes. All these issues make neural network optimization more difficult than would seem at first sight. However, many of these problems can be avoided by carefully tailoring the gradient-descent steps to be more robust to the nature of the optimization surface.
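To make this concrete, the following sketch (plain Python on a made-up, ill-conditioned quadratic loss $f(x, y) = x^2 + 50y^2$, not an example from the original text) shows how a single step size must compromise between directions: a step size that is barely stable along the steep $y$ direction makes slow progress along the shallow $x$ direction, while a slightly larger one already causes the iterates to diverge.

```python
# A minimal sketch (illustrative only) of why one fixed step size is hard to
# choose in multivariable optimization: the loss f(x, y) = x^2 + 50*y^2 has a
# gradient that is much steeper along y than along x.

def loss(x, y):
    return x**2 + 50.0 * y**2

def gradient(x, y):
    return 2.0 * x, 100.0 * y          # partial derivatives w.r.t. x and y

def gradient_descent(lr, steps=50, x=5.0, y=5.0):
    for _ in range(steps):
        gx, gy = gradient(x, y)
        x, y = x - lr * gx, y - lr * gy
    return loss(x, y)

# lr = 0.019 is (barely) stable along y, but shrinks x by only about 4% per step.
# lr = 0.021 exceeds the stability limit along y, and the loss grows instead.
print(gradient_descent(lr=0.019))
print(gradient_descent(lr=0.021))
```

The same tension, multiplied across millions of weights, is what makes the careful tailoring of gradient-descent steps mentioned above so important.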
A neural network is a computational graph, in which a unit of computation is the neuron. Neural networks are fundamentally more powerful than their building blocks because the parameters of these models are learned jointly to create a highly optimized composition function of these models. Furthermore, the nonlinear activations between the different layers add to the expressive power of the network. A multilayer network evaluates compositions of functions computed at individual nodes. A path of length 2 in the neural network in which the function $f(\cdot)$ follows $g(\cdot)$ can be considered a composition function $f(g(\cdot))$. Just to provide an idea, let us look at a trivial computational graph with two nodes, in which the sigmoid function is applied at each node to the input weight $w$. In such a case, the computed function appears as follows:
$f(g(w))=\frac{1}{1+\exp\left(-\frac{1}{1+\exp(-w)}\right)}$
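A minimal sketch (in Python, not part of the original text) of this two-node graph evaluates the nodes in sequence; the function names are illustrative only.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Two-node computational graph: g feeds into f, and both nodes apply the sigmoid.
def g(w):
    return sigmoid(w)

def f_of_g(w):
    return sigmoid(g(w))   # f(g(w)) = 1 / (1 + exp(-1 / (1 + exp(-w))))

print(f_of_g(0.0))         # about 0.6225, since g(0) = 0.5 and sigmoid(0.5) ≈ 0.6225
```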
We can already see how awkward it would be to compute the derivative of this function with respect to $w$. Furthermore, consider the case in which $g_1(\cdot), g_2(\cdot), \ldots, g_k(\cdot)$ are the functions computed in layer $m$, and they feed into a particular layer-$(m+1)$ node that computes $f(\cdot)$. In such a case, the composition function computed by the layer-$(m+1)$ node in terms of the layer-$m$ inputs is $f(g_1(\cdot), \ldots, g_k(\cdot))$. As we can see, this is a multivariate composition function, which looks rather ugly. Since the loss function uses the output(s) as its argument(s), it may typically be expressed as a recursively nested function in terms of the weights in earlier layers. For a neural network with 10 layers and only 2 nodes in each layer, the recursively nested function of depth 10 will result in a summation of $2^{10}$ recursively nested terms, which appears forbidding from the perspective of computing partial derivatives. Therefore, we need some kind of iterative approach to compute these derivatives. The resulting iterative approach is dynamic programming, and the corresponding update is really the chain rule of differential calculus.
In order to understand how the chain rule works in a computational graph, we will discuss the two basic variants of the rule that one needs to keep in mind. The simplest version of the chain rule works for a straightforward composition of two functions:

$\frac{\partial f(g(w))}{\partial w}=\frac{\partial f(g(w))}{\partial g(w)}\cdot\frac{\partial g(w)}{\partial w}$
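As a worked illustration (not from the original text), applying this rule to the nested sigmoid above, and using the standard fact that the derivative of the sigmoid $\sigma(x)$ is $\sigma(x)\,(1-\sigma(x))$, gives

$\frac{\partial f(g(w))}{\partial w}=f(g(w))\,\bigl(1-f(g(w))\bigr)\cdot g(w)\,\bigl(1-g(w)\bigr)$

Each factor is a local derivative that can be evaluated at a single node of the graph, which is precisely the property that the iterative (dynamic programming) approach exploits.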
Using dynamic programming to efficiently aggregate the product of local gradients along the exponentially many paths in a computational graph results in a dynamic programming update that is identical to the multivariable chain rule of differential calculus.
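The sketch below (plain Python, with illustrative function names that are not taken from the text) applies this idea to the two-node sigmoid graph from earlier: the forward pass evaluates the nodes in order, the backward pass multiplies the local gradients along the single path from $w$ to the output, and the result is checked against a finite-difference estimate. In a graph with many paths, the same local gradients would be reused across paths, which is where the dynamic programming saving comes from.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward_and_backward(w):
    # Forward pass: evaluate each node of the graph in order.
    g = sigmoid(w)          # first node:  g(w)
    f = sigmoid(g)          # second node: f(g(w))

    # Backward pass: multiply local gradients along the path (chain rule).
    df_dg = f * (1.0 - f)   # local derivative of the second node
    dg_dw = g * (1.0 - g)   # local derivative of the first node
    df_dw = df_dg * dg_dw   # derivative of the output with respect to w
    return f, df_dw

w = 1.5
f, analytic = forward_and_backward(w)

# Finite-difference check of the chain-rule result.
eps = 1e-6
numeric = (forward_and_backward(w + eps)[0] - forward_and_backward(w - eps)[0]) / (2 * eps)
print(analytic, numeric)    # the two values should agree to several decimal places
```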