Underfitting

A Simple Explanation - By Varsha Saini

The performance of a machine learning model typically suffers from one of two major issues: overfitting or underfitting. In this article, we will look at underfitting in detail: what it is, what causes it, how to prevent it, and how it relates to the bias-variance tradeoff.

Underfitting in Machine Learning

Underfitting is a situation in which a machine learning model fails to learn the pattern in the underlying data during training. The relationship between the independent variables and the dependent variable is not accurately captured, so the model performs poorly on both the training data and the testing data. In bias-variance terms, an underfit model has high bias: its assumptions are too simple, so even its training error stays high.
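
To make this concrete, below is a minimal sketch (assuming NumPy and scikit-learn, with made-up data) in which a straight line is fitted to a clearly non-linear target: the model scores poorly on the training set and the testing set alike.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = X.ravel() ** 2 + rng.normal(scale=0.2, size=200)  # quadratic target

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    line = LinearRegression().fit(X_train, y_train)  # a line cannot follow a parabola
    print("train R^2:", line.score(X_train, y_train))  # near 0: high training error (high bias)
    print("test  R^2:", line.score(X_test, y_test))    # near 0 as well

Because a straight line cannot bend to follow a parabola, no amount of extra training fixes this: the model itself is too simple.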

Underfitting can also result from efforts to avoid overfitting, for example when training is cut short too aggressively through early stopping.

Reasons for Underfitting

Below are a few common causes of underfitting:

  1. The model is too simple to capture the underlying pattern.
  2. There is not enough training data.
  3. The training data contains too much noise or too many outliers.
  4. The model is too heavily regularized.

How to Prevent Underfitting

Below are a few methods that can help prevent underfitting (a short sketch of two of them follows the list):

  1. Include more features.
  2. Reduce regularization.
  3. Increase model complexity.
  4. Train the model for longer.
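
As a hedged illustration of two of these fixes (the data and parameter values below are made up, not a recipe), the following scikit-learn sketch adds polynomial features, covering points 1 and 3, and lowers the Ridge regularization strength alpha, covering point 2:

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = X.ravel() ** 2 + rng.normal(scale=0.2, size=200)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Underfit: linear features only, heavy regularization.
    weak = make_pipeline(PolynomialFeatures(degree=1), Ridge(alpha=100.0))
    # Better: quadratic features, light regularization.
    strong = make_pipeline(PolynomialFeatures(degree=2), Ridge(alpha=0.1))

    for name, model in [("underfit", weak), ("better", strong)]:
        model.fit(X_train, y_train)
        print(name, "test R^2:", model.score(X_test, y_test))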

Bias-Variance Tradeoff

Bias and variance tend to trade off against each other: as a model becomes more complex, bias typically decreases while variance increases, and vice versa. The small sweep after the list below makes this concrete.

Roughly speaking, bias shows up as error on the training data, while variance shows up as the gap between training and testing error, i.e. how sensitive the model is to the particular training set it saw.

  • In overfitting, the model performs very well on the training dataset, so bias is low, but it performs badly on the testing dataset, so variance is high.
  • In underfitting, the model performs poorly even on the training data, so bias is high; because such a simple model barely responds to the data, variance is typically low.
  • For a balanced model, the error on both training and testing data should be low, which requires both bias and variance to be low.
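
A small sweep over polynomial degree (again a sketch on made-up data, assuming scikit-learn) makes the tradeoff visible: training error keeps falling as complexity grows, while testing error typically falls at first and then rises once the model starts to overfit.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(120, 1))
    y = np.sin(X).ravel() + rng.normal(scale=0.3, size=120)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

    for degree in (1, 3, 5, 10, 15):
        model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
        model.fit(X_train, y_train)
        train_mse = mean_squared_error(y_train, model.predict(X_train))
        test_mse = mean_squared_error(y_test, model.predict(X_test))
        print(f"degree {degree:2d}  train MSE {train_mse:.3f}  test MSE {test_mse:.3f}")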

Best Fit Model

  • The best-fit model is a generalized model that performs well on both training and testing data.
  • Its error on both training and testing data is low.
  • In complexity, it lies between the underfitted and the overfitted model.
  • It has low bias and low variance; one common way to find it is sketched below.
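
One common way to find such a model, sketched below under the same assumptions (scikit-learn, made-up data), is to let cross-validation choose the complexity: GridSearchCV scores each candidate polynomial degree on held-out folds and keeps the one that generalizes best.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(2)
    X = rng.uniform(-3, 3, size=(150, 1))
    y = np.sin(X).ravel() + rng.normal(scale=0.3, size=150)

    # Search over polynomial degree; 5-fold cross-validation picks the
    # complexity with the best held-out score.
    pipe = Pipeline([("poly", PolynomialFeatures()), ("reg", LinearRegression())])
    search = GridSearchCV(pipe, {"poly__degree": list(range(1, 11))}, cv=5)
    search.fit(X, y)

    print("best degree:", search.best_params_["poly__degree"])
    print("cross-validated R^2:", round(search.best_score_, 3))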