
The Inductive Biases of Various Machine Learning Algorithms

Every machine learning algorithm with any ability to generalize beyond the training data that it sees has, by definition, some type of inductive bias.

That is, there is some fundamental assumption or set of assumptions that the learner makes about the target function that enables it to generalize beyond the training data.

Below is a table showing the inductive biases of various machine learning algorithms (a short code sketch after the table illustrates how different biases lead to different predictions on the same data):

| Algorithm | Inductive Bias |
| --- | --- |
| Rote-Learner | None. |
| Candidate-Elimination | The target concept c is contained in the hypothesis space H. |
| Find-S | The target concept can be described in the given hypothesis space, and all instances are negative unless demonstrated otherwise. |
| Linear Regression | The relationship between the attributes x and the output y is linear; the goal is to minimize the sum of squared errors. |
| Decision Trees | Shorter trees are preferred over longer trees. Trees that place high-information-gain attributes close to the root are preferred over those that do not. |
| Single-Unit Perceptron | Each input votes independently toward the final classification, so interactions between inputs cannot be represented; the decision boundary is linear. |
| Neural Networks with Backpropagation | Smooth interpolation between data points. |
| K-Nearest Neighbors | The classification of an instance x will be most similar to the classifications of other instances that are nearby in Euclidean distance. |
| Support Vector Machines | Distinct classes tend to be separated by wide margins. |
| Naive Bayes | Each input depends only on the output class; the inputs are conditionally independent of one another given the class. |
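The Naive Bayes assumption in the last row can be written out explicitly: given the class label y, the joint distribution of the inputs factors into a product of per-input conditionals, and classification picks the class that maximizes the resulting product. In standard notation:

```latex
P(x_1, \dots, x_n \mid y) = \prod_{i=1}^{n} P(x_i \mid y)
\qquad\Longrightarrow\qquad
\hat{y} = \arg\max_{y} \; P(y) \prod_{i=1}^{n} P(x_i \mid y)
```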
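To see a few of these biases in action, here is a minimal sketch (assuming scikit-learn and NumPy are available) that fits linear regression, a decision tree, and k-nearest neighbors to the same five training points and then queries a point outside the training range. The differing predictions come entirely from the differing inductive biases.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor

# Five training points where y is roughly 2x plus a little noise.
X_train = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
y_train = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# A query point outside the range of the training data.
X_query = np.array([[10.0]])

models = {
    "Linear regression (assumes a linear relationship)": LinearRegression(),
    "Decision tree (piecewise-constant, prefers short trees)": DecisionTreeRegressor(random_state=0),
    "3-nearest neighbors (nearby points share labels)": KNeighborsRegressor(n_neighbors=3),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: {model.predict(X_query)[0]:.2f}")

# Linear regression extrapolates to roughly 20, while the tree and k-NN stay
# near the largest training targets (about 8-10), because their inductive
# biases do not support extrapolation beyond the training data.
```

None of the three models is "wrong" here; each simply encodes a different assumption about the target function, which is exactly what the table above captures.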
