Naive Bayes Classifiers
Naive Bayes (NB) classifiers apply Bayes' theorem with the "naive" assumption that the features are independent of one another.
For example, a fruit may be classified as an apple if it is red, round, and about 10 cm in diameter. The classifier treats each of these features as contributing independently to the probability that the fruit is an apple, regardless of any correlations between them.
Because they require only a small amount of training data to estimate their parameters, NB classifiers are attractive when labeled data is scarce.
NB classifiers have been especially successful in document classification tasks such as spam filtering, where word frequencies are commonly used as the features. They have also found applications in medical diagnosis.
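The computation behind this is simple enough to sketch by hand. Below is a minimal, illustrative Python example of a word-based spam filter: it estimates class priors and per-word likelihoods from a tiny made-up corpus (the messages and words are invented for illustration) and scores a new message as the prior times the product of independent per-word probabilities, with add-one (Laplace) smoothing to avoid zero probabilities for unseen words.

```python
from collections import Counter

# Toy training data: (label, words) pairs -- an entirely made-up corpus.
train = [
    ("spam", ["win", "money", "now"]),
    ("spam", ["win", "prize"]),
    ("ham",  ["meeting", "now"]),
    ("ham",  ["project", "money", "meeting"]),
]

labels = {label for label, _ in train}
# P(c): fraction of training messages with each label.
priors = {c: sum(1 for l, _ in train if l == c) / len(train) for c in labels}
# Word counts per class, and the overall vocabulary.
counts = {c: Counter(w for l, ws in train for w in ws if l == c) for c in labels}
vocab = {w for _, ws in train for w in ws}

def posterior(words, c):
    # P(c) * prod_i P(w_i | c), with add-one (Laplace) smoothing.
    total = sum(counts[c].values())
    p = priors[c]
    for w in words:
        p *= (counts[c][w] + 1) / (total + len(vocab))
    return p

def classify(words):
    # Pick the class with the highest (unnormalized) posterior.
    return max(labels, key=lambda c: posterior(words, c))

print(classify(["win", "money"]))  # prints "spam"
```

In practice the per-word probabilities are multiplied in log space to avoid numerical underflow on long documents, but the small example above keeps the raw product for readability.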
NB variants differ in the probability distribution they assume for the features:
- Gaussian Naive Bayes (GNB): assumes the likelihood of each feature follows a normal (Gaussian) distribution; frequently used with continuous data.
Example: a person is X in height, Y in weight, and Z in foot size; is the person male or female?
- Bernoulli Naive Bayes (BNB): assumes the likelihood of each feature follows a Bernoulli distribution, so each feature is a binary value.
Example: does word X appear in the document?
- Multinomial Naive Bayes (MNB): assumes the likelihood of the features follows a multinomial distribution; frequently used in document classification with bag-of-words features.
Example: how many times does word X appear in the document?
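The three variants above can be sketched with scikit-learn, which provides all of them under `sklearn.naive_bayes`. This is a minimal sketch assuming scikit-learn and NumPy are installed; all of the toy datasets (measurements, word indicators, and word counts) are invented for illustration only.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB, GaussianNB, MultinomialNB

# GaussianNB: continuous features, e.g. [height, weight, foot size].
# The measurements below are made up for illustration.
X_g = np.array([[180, 80, 28], [175, 75, 27], [160, 55, 23], [165, 60, 24]])
y_g = np.array(["male", "male", "female", "female"])
gnb = GaussianNB().fit(X_g, y_g)
print(gnb.predict([[177, 78, 27]]))  # -> ['male'] on this toy data

# BernoulliNB: binary features ("does word i appear in the document?").
X_b = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 1], [0, 1, 1]])
y_b = np.array(["spam", "spam", "ham", "ham"])
bnb = BernoulliNB().fit(X_b, y_b)
print(bnb.predict([[1, 1, 0]]))  # -> ['spam'] on this toy data

# MultinomialNB: count features ("how many times does word i appear?").
X_m = np.array([[3, 1, 0], [2, 0, 0], [0, 0, 2], [0, 1, 3]])
y_m = np.array(["spam", "spam", "ham", "ham"])
mnb = MultinomialNB().fit(X_m, y_m)
print(mnb.predict([[2, 1, 0]]))  # -> ['spam'] on this toy data
```

All three classifiers share the same `fit`/`predict` interface; only the likelihood model changes, so switching variants usually means changing one line.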