If a machine learning model learns the patterns in the training data too well, it can lead to the problem of **Overfitting**. Overfitting is a situation in which a model performs very well on the training data but poorly on the test data. To resolve this issue, a penalty term can be added to the model's equation; this process is called **Regularization**.

#### Lasso Regression

**Lasso Regression is a regularization method in which a small bias is added to the linear equation such that the overall complexity of the model is reduced.**

- It stands for **Least Absolute Shrinkage and Selection Operator**.
- It is also called **L1 Regularization**, since the penalty added is the absolute value (power 1) of the weights.
- It can be used as a method for **Feature Selection**: features whose weights shrink to zero (or very close to it) can be dropped.
- It can handle data that suffers from the **Multicollinearity** problem.
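As a concrete illustration, here is a minimal sketch using scikit-learn's `Lasso` on synthetic data (assuming NumPy and scikit-learn are installed). Only the first two of five features influence the target, and Lasso drives the weights of the irrelevant features to exactly zero, which is what makes it usable for feature selection:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Synthetic data: 100 samples, 5 features, but only the
# first two features actually influence the target.
X = rng.normal(size=(100, 5))
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(scale=0.1, size=100)

# In scikit-learn, the `alpha` argument plays the role of the penalty term lambda.
model = Lasso(alpha=0.5).fit(X, y)

# Weights of the three irrelevant features are driven to exactly zero,
# so those features can be dropped (feature selection).
print(np.round(model.coef_, 3))
```

Note that the surviving weights are also shrunk below their true values (3 and 2); that shrinkage is the "small bias" the definition above refers to.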

#### Cost Function of Linear Regression

**Cost =** $\sum_{i=1}^{n} (y_i - \hat{y}_i)^2$

#### Cost Function of Lasso Regression

The cost function of Linear Regression is modified by adding a Regularization term to it.

**Cost =** $\sum_{i=1}^{n} (y_i - \hat{y}_i)^2 + \lambda \sum_{j=1}^{p} |m_j|$

where

- **m** denotes the coefficients (weights) of the independent features.
- **λ** (lambda) is the penalty term.
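The modified cost function can be sketched directly in NumPy (`lasso_cost` is a hypothetical helper written for this article, not part of any library):

```python
import numpy as np

def lasso_cost(X, y, m, b, lam):
    """Sum of squared residuals plus lambda times the sum of |weights|."""
    y_hat = X @ m + b                     # predictions of the linear model
    rss = np.sum((y - y_hat) ** 2)        # cost of plain Linear Regression
    l1_penalty = lam * np.sum(np.abs(m))  # added regularization term
    return rss + l1_penalty

X = np.array([[1.0, 2.0],
              [2.0, 0.0],
              [3.0, 1.0]])
y = np.array([4.0, 4.0, 7.0])
m = np.array([2.0, 1.0])  # weights of the two features
b = 0.0                   # intercept

# The fit is perfect here, so only the penalty term remains:
print(lasso_cost(X, y, m, b, lam=0.5))  # 0.0 + 0.5 * (2 + 1) = 1.5
```

With λ = 0 this reduces to the plain Linear Regression cost, which is exactly the relationship the two formulas above describe.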

#### Idea

- The regularization term is added to the cost function so that, when an optimization algorithm such as gradient descent minimizes the cost, the values of the weights are reduced.
- Reducing the weights decreases the complexity of the model, reduces variance, and hence resolves the Overfitting problem.
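This idea can be sketched with plain (sub)gradient descent in NumPy. The function name, learning rate, and step count below are illustrative assumptions, not a standard API; since |m| is not differentiable at 0, `sign(m)` is used as a subgradient:

```python
import numpy as np

def lasso_cost(X, y, m, lam):
    return np.sum((y - X @ m) ** 2) + lam * np.sum(np.abs(m))

def fit_lasso_subgradient(X, y, lam, lr=0.05, steps=2000):
    """Minimize the Lasso cost with subgradient descent.

    Gradient of the squared-error part: -2 X^T (y - X m).
    Subgradient of the lambda * |m| part: lam * sign(m), with sign(0) = 0.
    """
    n, p = X.shape
    m = np.zeros(p)
    for _ in range(steps):
        grad = -2.0 * X.T @ (y - X @ m) + lam * np.sign(m)
        m -= lr * grad / n
    return m

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([3.0, 0.0, 1.0]) + rng.normal(scale=0.1, size=50)

m_hat = fit_lasso_subgradient(X, y, lam=1.0)
print(np.round(m_hat, 3))  # weight of the irrelevant middle feature stays near 0
```

In practice coordinate descent (as used by scikit-learn) converges faster and produces exact zeros, but the subgradient sketch makes the "minimize cost, shrink weights" mechanism explicit.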

#### Controlling the Regularization Parameter

- Lambda **λ**, the penalty term in the equation, controls the strength of regularization.
- A **high value of lambda** increases regularization and decreases the values of the coefficients (weights). It can **increase bias** and lead to the problem of **Underfitting**.
- A **low value of lambda** reduces regularization and has little effect on the values of the coefficients. Hence the problem of **Overfitting** may not get resolved, as variance will not be reduced.
- Finding an optimal value of λ is essential to building a good Lasso Regression model.
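The effect of λ can be seen by refitting the same data at several values (a sketch with scikit-learn's `Lasso`, where the `alpha` argument plays the role of λ; the data is synthetic):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([4.0, -2.0, 1.0]) + rng.normal(scale=0.1, size=200)

l1_norms = []
for alpha in [0.01, 0.5, 5.0]:  # low, moderate, high lambda
    coef = Lasso(alpha=alpha).fit(X, y).coef_
    l1_norms.append(np.sum(np.abs(coef)))
    print(f"alpha={alpha}: coefficients={np.round(coef, 2)}")
```

At the low value the coefficients stay close to the true values (4, -2, 1); as λ grows they shrink, and at the high value every weight is forced to zero, which is the underfitting regime described above. In practice, the optimal λ is usually chosen by cross-validation (e.g. scikit-learn's `LassoCV`).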