
142. Label Smoothing

Easy

Implement label smoothing, a regularization technique that prevents a model from becoming overconfident by softening the hard target labels. Instead of assigning probability 1 to the correct class and 0 to all others, label smoothing distributes a small amount of probability mass across all classes.

Formula:

y_smooth = y * (1 - epsilon) + epsilon / num_classes

Where:

y = original one-hot label vector

epsilon = smoothing factor (e.g., 0.1 or 0.3)

num_classes = total number of classes

Example:

Input: label = [0, 0, 1], factor = 0.3 (num_classes = 3)

Output: [0.1, 0.1, 0.8]

Explanation: The true class retains most of the probability: 1 * (1 - 0.3) + 0.3 / 3 = 0.7 + 0.1 = 0.8. Each non-true class receives 0 * (1 - 0.3) + 0.3 / 3 = 0.1. Because the model can no longer be pushed toward full confidence in a single class, label smoothing tends to improve generalization and calibration.

Constraints:

  • The input label is a 1D NumPy array representing a one-hot encoded vector.
  • `factor` (epsilon) is a float in the range [0, 1).
  • The output must sum to 1.0 (valid probability distribution).
  • Use NumPy for all array operations.
Test Cases:

  Test Case 1
  Input: [0, 0, 1], factor = 0.3
  Expected: [0.1, 0.1, 0.8]

  Test Case 2
  Input: [0, 1, 0], factor = 0.3
  Expected: [0.1, 0.8, 0.1]

  + 3 hidden test cases
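A minimal sketch of one possible solution, applying the formula above directly with NumPy broadcasting (the function name `smooth_labels` is illustrative, not prescribed by the problem):

```python
import numpy as np

def smooth_labels(label, factor):
    """Apply label smoothing: y * (1 - epsilon) + epsilon / num_classes."""
    label = np.asarray(label, dtype=float)
    num_classes = label.shape[0]
    # Scale the one-hot vector down, then spread epsilon evenly over all classes.
    return label * (1.0 - factor) + factor / num_classes

print(smooth_labels([0, 0, 1], 0.3))  # Test Case 1
print(smooth_labels([0, 1, 0], 0.3))  # Test Case 2
```

Since both terms of the formula are elementwise, no loop over classes is needed; the scalar `factor / num_classes` broadcasts across the whole array, and the result still sums to 1.0 because (1 - epsilon) + num_classes * (epsilon / num_classes) = 1.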