
Problem 137 · Easy

L1 Regularization (Lasso)

Implement L1 regularization, a technique that adds a penalty proportional to the **sum of absolute values** of the model weights to the loss function. L1 regularization is known for inducing **sparsity** -- it drives many weights exactly to zero, effectively performing automatic feature selection.

Formula:

L1_penalty = lambda * sum(|w_i|)

Total loss = original_loss + L1_penalty

Where:

  • lambda (strength) = regularization coefficient (hyperparameter)
  • |w_i| = absolute value of each weight
  • the sum runs over all weights in the model

Example:

Input: weights = [1, -2, 3, 0, -0.5], strength = 0.5

L1_penalty = 0.5 * (|1| + |-2| + |3| + |0| + |-0.5|)

= 0.5 * (1 + 2 + 3 + 0 + 0.5)

= 0.5 * 6.5

= 3.25

Output: 3.25

**Explanation:** L1 regularization adds the absolute weight magnitudes to the loss. During gradient descent, the gradient of |w| is +1 or -1 (the sign of w), which pushes weights toward zero by a constant amount regardless of their magnitude. This constant push is why L1 tends to produce sparse solutions -- small weights are driven all the way to exactly zero. This sparsity property makes L1 useful for feature selection and model compression. L1 is also called Lasso regularization in the context of linear regression.
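The constant push described above can be illustrated with a toy gradient-descent sketch (names and step count are illustrative, not part of the problem). Each step shrinks every nonzero weight by the same amount, `lr * strength`; the clamp at zero is the soft-thresholding step used in proximal methods, and it is what lets small weights land exactly on zero:

```python
import numpy as np

def l1_grad(weights, strength):
    # Subgradient of strength * sum(|w_i|) is strength * sign(w_i).
    # np.sign returns 0 at w == 0, a common subgradient choice.
    return strength * np.sign(weights)

w = np.array([1.0, -2.0, 0.3])
lr = 0.1
for _ in range(5):
    step = lr * l1_grad(w, strength=1.0)
    # Shrink toward zero, clamping so a weight stops at 0 instead of
    # oscillating around it (soft thresholding).
    w = np.sign(w) * np.maximum(np.abs(w) - np.abs(step), 0.0)
```

After five steps every weight has moved 0.5 toward zero, so the small weight 0.3 has been driven exactly to 0 (and stays there, since sign(0) = 0), while the larger weights survive with reduced magnitude.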

Constraints:

  • `weights` is a 1D numpy array of floats (can include negative values)
  • `strength` (lambda) is a non-negative float
  • Return a single scalar value representing the L1 penalty
  • Use np.abs and np.sum for computation

Test Cases:

Test Case 1
Input: [[1, 2, 3], 0.5]
Expected: 3.0

Test Case 2
Input: [[4, 5, 6], 1.0]
Expected: 15.0

+ 3 hidden test cases
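One way to implement this, following the formula and constraints above (the function name is an assumption; only the signature `(weights, strength)` is implied by the problem):

```python
import numpy as np

def l1_regularization(weights, strength):
    """Compute the L1 penalty: strength * sum(|w_i|)."""
    return strength * np.sum(np.abs(weights))

# Worked example from the problem statement:
penalty = l1_regularization(np.array([1.0, -2.0, 3.0, 0.0, -0.5]), 0.5)
# 0.5 * (1 + 2 + 3 + 0 + 0.5) = 3.25
```

`np.abs` handles the negative weights elementwise and `np.sum` reduces the array to the scalar the problem asks for, so no explicit loop is needed.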