Connectionism is a set of approaches that models mental or behavioral phenomena as the emergent processes of interconnected networks of simple units. The most common forms use neural network models.
- Neuron: a simple unit that computes a weighted sum of its inputs plus a bias and passes the result through an activation function.
- Layer: a set of neurons that receive their inputs from the same preceding layer.
- Deep learning: networks with an input layer, an output layer, and multiple hidden layers in between.
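These definitions can be sketched as a minimal forward pass through a small network (the layer sizes, weight scale, and sigmoid activation are illustrative assumptions, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Activation function: squashes each neuron's weighted sum into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative layer sizes: 4 inputs, two hidden layers, 2 outputs.
sizes = [4, 8, 6, 2]
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    # Each neuron computes a weighted sum of the previous layer's
    # activations plus a bias, then applies the activation function.
    a = x
    for W, b in zip(weights, biases):
        a = sigmoid(a @ W + b)
    return a

out = forward(rng.normal(size=4))
print(out.shape)  # (2,)
```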
Deep Learning Architectures:
- Boltzmann Machines: stochastic networks of symmetrically connected binary units governed by an energy function; in general, any unit may connect to any other.
- Restricted Boltzmann Machines (RBMs): Boltzmann machines restricted to a bipartite graph of visible and hidden units, with no connections within a layer, which makes training tractable.
- Deep Belief Network (DBN): a stack of RBMs trained one layer at a time, each layer modeling the activations of the layer below.
- Deep Boltzmann Machine: DBN-style stacked RBMs + greedy layer-wise training algorithm.
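The RBM building block can be sketched with one step of contrastive divergence (CD-1), the standard RBM training rule; the layer sizes, learning rate, and training pattern here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden = 6, 3  # illustrative sizes
W = rng.normal(0, 0.1, (n_visible, n_hidden))
b_v = np.zeros(n_visible)   # visible-unit biases
b_h = np.zeros(n_hidden)    # hidden-unit biases

def cd1_step(v0, lr=0.1):
    # One contrastive-divergence (CD-1) update on a single binary vector.
    global W, b_v, b_h
    ph0 = sigmoid(v0 @ W + b_h)                      # up: hidden probabilities
    h0 = (rng.random(n_hidden) < ph0).astype(float)  # sample a hidden state
    pv1 = sigmoid(h0 @ W.T + b_v)                    # down: reconstruct visibles
    ph1 = sigmoid(pv1 @ W + b_h)                     # up again on reconstruction
    # Positive phase minus negative phase, as in the CD-1 gradient estimate.
    W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    b_v += lr * (v0 - pv1)
    b_h += lr * (ph0 - ph1)
    return np.mean((v0 - pv1) ** 2)                  # reconstruction error

pattern = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
errors = [cd1_step(pattern) for _ in range(200)]
print(f"{errors[0]:.3f} -> {errors[-1]:.3f}")
```

A DBN would stack several such RBMs, feeding each layer's hidden activations to the next as its visible input.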
Convolutional Neural Networks:
- Individual neurons are tiled in such a way that they respond to overlapping regions in the visual field.
- The specific portion of the input image that a given neuron responds to is called its receptive field.
- May include local or global pooling layers which combine the outputs of neuron clusters.
- The convolutional part is where neurons share weights over local regions of the input; the fully connected part is where every neuron of one layer connects to every neuron of the next layer.
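The convolution and pooling operations above can be sketched in plain NumPy (the image and kernel values are illustrative; like most CNN libraries, this computes a cross-correlation rather than a flipped-kernel convolution):

```python
import numpy as np

def conv2d(image, kernel):
    # Valid 2-D convolution: each output unit sees one local receptive field.
    kh, kw = kernel.shape
    H = image.shape[0] - kh + 1
    W = image.shape[1] - kw + 1
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(x, size=2):
    # Local pooling: combine each size x size cluster of outputs into one.
    H, W = x.shape[0] // size, x.shape[1] // size
    return x[:H*size, :W*size].reshape(H, size, W, size).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)
edge = np.array([[1.0, -1.0]])  # a tiny horizontal-gradient kernel
fmap = conv2d(image, edge)      # shape (6, 5)
pooled = max_pool(fmap)         # shape (3, 2)
print(fmap.shape, pooled.shape)
```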
Convolutional Deep Belief Networks: CNN structure + the greedy layer-wise training of a DBN.
- DBNs might work better for non-vision related problems.
- CNNs should work better on visual problems.
Unsupervised Greedy Layer Wise Training
1. Train the first hidden layer and reconstruct the input from the hidden layer's activations.
2. Train the second hidden layer using the first hidden layer's activations as input, and reconstruct the first hidden layer from them.
3. Repeat step 2 for each subsequent hidden layer until the final output layer is reached.
While each new layer is trained, the earlier layers' weights are typically held fixed; a final fine-tuning pass can then adjust the weights of all layers together.
- Input -> 1H -> Input Reconstruction
- Input -> 1H -> 2H -> 1H Reconstruction
- Input -> 1H -> 2H -> Output
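The steps above can be sketched with tied-weight autoencoders trained one layer at a time; the layer sizes, learning rate, and the use of autoencoders rather than RBMs are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_layer(X, n_hidden, lr=0.5, epochs=200):
    # Train one hidden layer as a tied-weight autoencoder on inputs X,
    # then return its weights and its activations (input for the next layer).
    n_in = X.shape[1]
    W = rng.normal(0, 0.1, (n_in, n_hidden))
    for _ in range(epochs):
        H = sigmoid(X @ W)                        # encode
        R = sigmoid(H @ W.T)                      # reconstruct this layer's input
        dR = (R - X) * R * (1 - R)                # error signal at reconstruction
        dH = (dR @ W) * H * (1 - H)               # error signal at hidden layer
        W -= lr * (X.T @ dH + dR.T @ H) / len(X)  # tied-weight gradient step
    return W, sigmoid(X @ W)

# Illustrative unlabeled binary data: 50 samples, 8 features.
X = (rng.random((50, 8)) < 0.5).astype(float)
layers, inp = [], X
for n_hidden in (5, 3):  # train each hidden layer in turn on the layer below
    W, inp = train_layer(inp, n_hidden)
    layers.append(W)
print([W.shape for W in layers])  # [(8, 5), (5, 3)]
```

A supervised output layer and a fine-tuning pass over all weights would typically follow this unsupervised pretraining.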
- Requires a lot of data, which can be unlabeled.
- Requires substantial CPU/GPU compute.