
Showing posts from January, 2022

Fully Connected Layers

Each feature in the final spatial layer is connected to each hidden unit in the first fully connected layer. This layer functions in exactly the same way as a traditional feed-forward network. In most cases, one uses more than one fully connected layer to increase the power of the computations toward the end. The connections among these layers are structured exactly like those of a traditional feed-forward network. Since the fully connected layers are densely connected, the vast majority of parameters lie in them. For example, if each of two fully connected layers has 4096 hidden units, then the connections between them have more than 16 million weights. Similarly, the connections from the last spatial layer to the first fully connected layer have a large number of parameters. Even though the convolutional layers have a larger number of activations (and a larger memory footprint), the fully connected layers often have a larger number of connections (and parameters).
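The parameter counts above are simple products of layer widths, which a short sketch makes concrete. The 7×7×512 final spatial layer used here is an illustrative assumption, not a size stated above:

```python
# Parameter counts for densely connected layers, as described above.

def fc_params(n_in, n_out, bias=True):
    """Number of weights (and optional biases) in a dense connection."""
    return n_in * n_out + (n_out if bias else 0)

# Connections between two 4096-unit fully connected layers:
# more than 16 million weights.
between_fc = fc_params(4096, 4096, bias=False)
print(between_fc)  # 16777216

# Connections from a hypothetical 7x7x512 final spatial layer into the
# first fully connected layer are even more numerous.
spatial_to_fc = fc_params(7 * 7 * 512, 4096, bias=False)
print(spatial_to_fc)  # 102760448
```

This is why, despite the convolutional layers dominating the memory footprint of the activations, the fully connected layers dominate the weight count.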

Local Response Normalization

A trick introduced in early convolutional networks is local response normalization, which is always used immediately after the ReLU layer. The use of this trick aids generalization. The basic idea of this normalization approach is inspired by biological principles, and it is intended to create competition among different filters. First, we describe the normalization formula using all filters, and then we describe how it is actually computed using only a subset of filters. Consider a situation in which a layer contains $N$ filters, and the activation values of these $N$ filters at a particular spatial position $(x, y)$ are given by $a_1 \ldots a_N$. Then, each $a_i$ is converted into a normalized value $b_i$ using the following formula: $$b_i=\frac{a_i}{\left(k+\alpha \sum_j a_j^2\right)^\beta}$$ The values of the underlying parameters used in the original paper are $k = 2$, $\alpha = 10^{-4}$, and $\beta = 0.75$. However, in practice, one does not normalize over all $N$ filters. Rather, the filters are ordered arbitrarily, and each activation is normalized using only a small window of adjacent filters in this ordering.
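Both variants of the formula are easy to sketch. The following is a minimal NumPy implementation for the activations at a single spatial position; the window size of 5 in the second function is an illustrative choice, not a value fixed above:

```python
import numpy as np

def local_response_norm(a, k=2.0, alpha=1e-4, beta=0.75):
    """Normalize activations a_1..a_N at one spatial position over all
    N filters: b_i = a_i / (k + alpha * sum_j a_j**2) ** beta."""
    a = np.asarray(a, dtype=float)
    denom = (k + alpha * np.sum(a ** 2)) ** beta
    return a / denom

def local_response_norm_window(a, n=5, k=2.0, alpha=1e-4, beta=0.75):
    """Practical variant: each a_i is normalized using only a window of
    n neighboring filters (n//2 on each side) in the chosen ordering."""
    a = np.asarray(a, dtype=float)
    half = n // 2
    b = np.empty_like(a)
    for i in range(len(a)):
        lo, hi = max(0, i - half), min(len(a), i + half + 1)
        b[i] = a[i] / (k + alpha * np.sum(a[lo:hi] ** 2)) ** beta
    return b

acts = np.array([1.0, -2.0, 3.0])
print(local_response_norm(acts))
print(local_response_norm_window(acts, n=3))
```

Because $\alpha$ is small, the denominator stays close to $k^{\beta}$ unless several nearby filters fire strongly, which is exactly the competitive effect described above.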

Hierarchical Feature Engineering

It is instructive to examine the activations of the filters created by real-world images in different layers. The activations of the filters in the early layers are low-level features like edges, whereas those in later layers put together these low-level features. For example, a mid-level feature might put together edges to create a hexagon, whereas a higher-level feature might put together the mid-level hexagons to create a honeycomb. It is fairly easy to see why a low-level filter might detect edges. Consider a situation in which the color of the image changes along an edge. As a result, the difference between neighboring pixel values will be non-zero only across the edge. This can be achieved by choosing the appropriate weights in the corresponding low-level filter. Note that the filter to detect a horizontal edge will not be the same as that to detect a vertical edge. This brings us back to Hubel and Wiesel's experiments, in which different neurons in the cat's visual cortex responded to edges of different orientations.
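The edge-detection argument above can be sketched directly: a filter of differencing weights responds only where neighboring pixel values change. The tiny synthetic image and the $[-1, 1]$ kernel below are illustrative assumptions:

```python
import numpy as np

# A tiny synthetic grayscale image with a vertical edge:
# left half dark, right half bright.
img = np.zeros((6, 6))
img[:, 3:] = 1.0

# A differencing filter: responds where values change left-to-right,
# i.e. across a vertical edge. Its transpose would detect horizontal edges.
vertical_edge_filter = np.array([[-1.0, 1.0]])

def correlate2d_valid(image, kernel):
    """Minimal 'valid' cross-correlation (no padding), written out
    explicitly to keep the sketch dependency-free."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

response = correlate2d_valid(img, vertical_edge_filter)
# The response is non-zero only at the column where the intensity jumps.
print(np.nonzero(response[0]))
```

Applying `vertical_edge_filter.T` to the same image yields an all-zero response, illustrating why horizontal- and vertical-edge filters must have different weights.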