Every machine learning algorithm that can generalize beyond the training data it sees has, by definition, some form of inductive bias.

That is, there is some fundamental assumption or set of assumptions that the learner makes about the target function that enables it to generalize beyond the training data.

Below is a table showing the inductive biases of various machine learning algorithms:

Algorithm | Inductive Bias
---|---
Rote-Learner | None. It simply memorizes training instances and cannot classify unseen ones.
Candidate-Elimination | The target concept c is contained in the hypothesis space H.
Find-S | The target concept can be described in its hypothesis space. All instances are negative instances unless demonstrated otherwise.
Linear Regression | The relationship between the attributes x and the output y is linear. The goal is to minimize the sum of squared errors.
Decision Trees | Shorter trees are preferred over longer trees. Trees that place high-information-gain attributes close to the root are preferred over those that do not.
Single-Unit Perceptron | The classes are linearly separable: each input contributes independently through its weight toward the final classification, so interactions between inputs cannot be represented.
Neural Networks with Backpropagation | Smooth interpolation between data points.
K-Nearest Neighbors | The classification of an instance x will be most similar to the classification of other instances that are nearby in Euclidean distance.
Support Vector Machines | Distinct classes tend to be separated by wide margins.
Naive Bayes | Each input depends only on the output class or label; the inputs are conditionally independent of each other given that class.
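Linear regression's bias from the table can be made concrete: the learner commits to a straight-line hypothesis and picks the one line minimizing the sum of squared errors. A minimal sketch of the single-feature closed-form solution (`fit_linear` is a hypothetical helper name, not from any particular library):

```python
def fit_linear(xs, ys):
    """Least-squares fit of y ≈ slope * x + intercept.

    The inductive bias: the relationship between x and y is assumed
    linear, and among all lines we choose the one minimizing the sum
    of squared errors (closed form for a single feature).
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # data generated exactly by y = 2x + 1
slope, intercept = fit_linear(xs, ys)
# → slope == 2.0, intercept == 1.0
```

Note that if the true relationship were quadratic, this learner would still return a line: the bias is what lets it generalize, but it only generalizes well when the assumption holds.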
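The K-Nearest Neighbors entry can likewise be sketched directly: the only assumption is that instances close in Euclidean distance share a label, so classification is a majority vote among the k nearest training points (a toy sketch, with an assumed data layout of `(features, label)` pairs):

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify `query` by majority vote of its k nearest neighbors.

    The inductive bias: instances near `query` in Euclidean distance
    are assumed to have the same label as `query`.
    `train` is a list of ((x1, x2, ...), label) pairs.
    """
    # Sort training instances by Euclidean distance to the query point.
    by_dist = sorted(train, key=lambda item: math.dist(item[0], query))
    # Majority vote among the k closest labels.
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

train = [((0.0, 0.0), "red"), ((0.1, 0.2), "red"), ((0.2, 0.1), "red"),
         ((5.0, 5.0), "blue"), ((5.1, 4.9), "blue"), ((4.9, 5.2), "blue")]
knn_classify(train, (0.3, 0.3))   # → "red" (all 3 nearest neighbors are red)
```

Unlike linear regression, k-NN makes no global assumption about the shape of the target function; its bias is purely local, which is why it needs the distance metric to be meaningful for the task.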