Elastic Net combines L1 (Lasso) and L2 (Ridge) regularization by adding both an absolute-value and a squared penalty to the loss function, striking a balance between the two. It is particularly useful for high-dimensional datasets with groups of highly correlated features, where Lasso alone tends to select one feature from a group arbitrarily.
The Elastic Net loss function is:
$$\text{Loss} = \text{MSE} + \lambda_1 \sum_{i=1}^{n} |w_i| + \lambda_2 \sum_{i=1}^{n} w_i^2$$
where $\lambda_1$ controls the strength of the L1 penalty and $\lambda_2$ controls the strength of the L2 penalty.
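As a minimal sketch of the idea, here is Elastic Net fitted with scikit-learn on hypothetical synthetic data containing a pair of highly correlated features. Note that scikit-learn parameterizes the penalty differently from the formula above: it uses a single `alpha` and a mixing ratio `l1_ratio`, so that $\lambda_1 = \alpha \cdot \text{l1\_ratio}$ and $\lambda_2 = \alpha \cdot (1 - \text{l1\_ratio}) / 2$ (with the MSE term averaged over samples).

```python
import numpy as np
from sklearn.linear_model import ElasticNet

# Hypothetical synthetic data: feature 1 is nearly a copy of feature 0,
# the kind of correlated pair where Elastic Net behaves better than pure Lasso.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=100)  # highly correlated pair
y = 3 * X[:, 0] + rng.normal(scale=0.1, size=100)

# alpha sets the overall penalty strength; l1_ratio=0.5 gives an even
# L1/L2 mix in scikit-learn's parameterization.
model = ElasticNet(alpha=0.1, l1_ratio=0.5)
model.fit(X, y)
print(model.coef_)
```

Unlike pure Lasso, which would typically zero out one of the two correlated columns, the elastic net penalty tends to spread the weight across both.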