Neural Networks - Architectures
Updated at 2019-01-31 23:35
A neural network has an architecture, i.e. its layout. The architecture usually describes:
- The input layer, the output layer and the number of hidden layers.
- How many neurons there are in each layer.
- How the neurons are connected. Usually all neurons of a layer are used as input to ALL neurons of the next layer (a fully connected layer), but not always.
- What activation function is used in each layer or neuron.
- Whether the network contains loops, and where.
A column of parallel neurons is called a layer. Stacking layers lets the network make more complex and abstract decisions.
Input Layer => Hidden Layer 1 => Output Layer
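The diagram above maps directly to code. A minimal sketch in NumPy, with made-up layer sizes (4 inputs, one hidden layer of 5 neurons, 3 outputs), where every neuron of a layer feeds every neuron of the next layer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights and biases for a fully connected network:
# 4 inputs -> 5 hidden neurons -> 3 outputs (sizes are illustrative).
W1, b1 = rng.normal(size=(4, 5)), np.zeros(5)
W2, b2 = rng.normal(size=(5, 3)), np.zeros(3)

def forward(x):
    hidden = np.tanh(x @ W1 + b1)  # hidden layer with tanh activation
    return hidden @ W2 + b2        # output layer, linear activation

print(forward(rng.random(4)))      # 3 output values for one 4-dim input
```

Picking the layer count, layer sizes and activation functions is exactly what "choosing an architecture" means.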
The bottleneck layer is the layer just before the final output layer, which e.g. does the actual classification. It's called a bottleneck because its representation of the data is much more compact than in the earlier layers of the network.
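For example, with hypothetical layer widths (not from the source):

```python
# Layer widths of an imaginary image classifier: 784 inputs,
# two hidden layers, a 64-unit bottleneck, and a 10-way output.
# The 64-unit layer holds the most compact representation the
# network forms before the final classification.
layer_widths = [784, 512, 256, 64, 10]
```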
Common neural network categories are:
- Shallow Neural Network: Network with a single hidden layer.
- Deep Neural Network: Network with two or more hidden layers. Modern deep networks can have hundreds of layers.
- Feedforward Neural Network: Network with no loops; information flows only forward, from input towards output.
- Recurrent Neural Network: Network with potential loops. In RNNs, neurons fire for some limited duration of time, so the output of a neuron can feed back into the network as later input (see the first sketch after this list).
- Autoencoder Network: An autoencoder's purpose is to reconstruct its own inputs. It essentially learns a new representation for a set of data, typically for dimensionality reduction. Autoencoders are frequently stacked so that each layer learns a more abstract representation, e.g. pixels > corners > nose edges > noses (see the second sketch after this list).
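A minimal sketch of the recurrent case, using NumPy and made-up sizes (3-dim inputs, 4-dim hidden state): the hidden state loops back into the next time step, which is what "firing for a limited duration" looks like in code.

```python
import numpy as np

rng = np.random.default_rng(1)

W_in = rng.normal(0, 0.5, (3, 4))   # input -> hidden weights
W_rec = rng.normal(0, 0.5, (4, 4))  # hidden -> hidden weights (the loop)

h = np.zeros(4)                      # initial hidden state
for x_t in rng.random((6, 3)):       # a sequence of six 3-dim inputs
    h = np.tanh(x_t @ W_in + h @ W_rec)  # h carries past inputs forward
print(h)                             # final state summarizes the sequence
```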
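And a sketch of a single (untrained) autoencoder, again with made-up sizes: a 784-dim input is squeezed through a 32-dim code and reconstructed; training would adjust the weights to minimize the reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(2)

# Encoder squeezes 784 dims down to a 32-dim code; decoder expands back.
W_enc = rng.normal(0, 0.01, (784, 32))
W_dec = rng.normal(0, 0.01, (32, 784))

def encode(x):
    return np.maximum(0, x @ W_enc)  # compact representation (ReLU)

def decode(code):
    return code @ W_dec              # attempted reconstruction of x

x = rng.random((5, 784))             # a batch of five fake inputs
reconstruction_error = np.mean((x - decode(encode(x))) ** 2)
print(reconstruction_error)          # training drives this towards zero
```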