Discriminant analysis (DA) predicts a categorical output (called the grouping variable) from one or more continuous or binary inputs (called predictor variables).
DA models are cheap to compute, inherently multi-class, and have no hyperparameters to tune.
Linear Discriminant Analysis (LDA): can learn linear boundaries between two or more categories. It tries to identify the directions that account for the most variance between classes. Adding shrinkage improves accuracy when the number of training samples is small compared to the number of features (i.e., the features-to-samples ratio approaches or exceeds 1).
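A minimal sketch of this, assuming scikit-learn is available: with fewer samples than would comfortably estimate a full covariance matrix, shrinkage regularizes the estimate. Note that shrinkage requires the `lsqr` or `eigen` solver (the default `svd` solver does not support it). The data here is synthetic and only for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Few samples relative to features: 40 samples, 30 features
X = rng.normal(size=(40, 30))
y = rng.integers(0, 2, size=40)
X[y == 1] += 0.5  # shift class 1 so the classes are separable

# shrinkage="auto" uses the Ledoit-Wolf estimate of the shrinkage intensity
lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
lda.fit(X, y)
print(lda.score(X, y))
```

With `shrinkage="auto"`, the regularization strength is chosen analytically rather than by cross-validation, which keeps the "no hyperparameters" property noted above.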
Quadratic Discriminant Analysis (QDA): can learn quadratic (e.g. elliptical and parabolic) boundaries, which are more flexible.
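A hedged sketch of where QDA's flexibility pays off, again assuming scikit-learn and synthetic data: one class is tightly clustered inside the other, so no straight line separates them, but a quadratic (roughly elliptical) boundary does.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(0)
# Class 0 is tightly clustered; class 1 is widely spread around it
X0 = rng.normal(scale=0.5, size=(100, 2))
X1 = rng.normal(scale=3.0, size=(100, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# QDA fits one covariance matrix per class, which yields curved boundaries
qda = QuadraticDiscriminantAnalysis()
qda.fit(X, y)
print(qda.score(X, y))
```

LDA, which assumes a shared covariance matrix across classes, would be near chance on this data; QDA's per-class covariances are exactly what recovers the elliptical boundary.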
PCA maximizes variance across all data points and ignores class labels;
LDA maximizes variance between classes while minimizing variance within each class.
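The contrast above can be sketched with scikit-learn on synthetic data built so the two criteria disagree: most of the overall variance lies along the first axis, but the class separation lies along the second. The construction is an assumption for illustration, not from the original text.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# High variance along axis 0, low variance along axis 1
X = rng.normal(size=(200, 2)) * np.array([5.0, 1.0])
y = (rng.random(200) > 0.5).astype(int)
X[:, 1] += y * 4.0  # the class signal lives on the low-variance axis

# PCA's top component follows overall variance (axis 0, labels ignored)
pca_dir = PCA(n_components=1).fit(X).components_[0]
# LDA's discriminant direction follows class separation (axis 1)
lda = LinearDiscriminantAnalysis(n_components=1).fit(X, y)
lda_dir = lda.scalings_[:, 0]

print(np.abs(pca_dir))   # weight concentrated on the first coordinate
print(np.abs(lda_dir))   # weight concentrated on the second coordinate
```

Here both methods reduce the data to one dimension, yet they pick nearly orthogonal directions: PCA keeps the spread, LDA keeps the class separation.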