111. Early Stopping

Difficulty: Easy

Implement early stopping to prevent overfitting during neural network training. Early stopping is a form of regularization that monitors the validation loss during training and halts the process when the validation loss stops improving for a specified number of consecutive epochs (called "patience").

The early stopping algorithm is:

min_val_loss = infinity
patience_counter = 0

for each epoch:
    if val_loss < min_val_loss:
        min_val_loss = val_loss
        patience_counter = 0       # Reset counter on improvement
    else:
        patience_counter += 1      # Increment counter on no improvement
        if patience_counter >= patience:
            STOP training at this epoch

Your function early_stopping(train_losses, val_losses, patience) should return the epoch index at which training should stop. If patience is never exceeded, return the total number of epochs.

Example:

Input: train_losses = [0.5, 0.4, 0.3], val_losses = [0.2, 0.1, 0.15], patience = 1

Epoch 0: val_loss = 0.2 < inf -> min_val_loss = 0.2, counter = 0

Epoch 1: val_loss = 0.1 < 0.2 -> min_val_loss = 0.1, counter = 0

Epoch 2: val_loss = 0.15 >= 0.1 -> counter = 1 >= patience(1) -> STOP

Output: 2 (stop at epoch index 2)
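The trace above can be sketched as a direct Python/NumPy translation of the algorithm. This is one possible implementation, not the official solution; note that train_losses is accepted only to match the required signature — the stopping rule consults val_losses alone:

```python
import numpy as np

def early_stopping(train_losses, val_losses, patience):
    """Return the epoch index at which training should stop.

    If patience is never exceeded, return len(val_losses).
    """
    val_losses = np.asarray(val_losses, dtype=float)
    min_val_loss = np.inf   # best validation loss seen so far
    patience_counter = 0    # epochs since the last improvement

    for epoch, val_loss in enumerate(val_losses):
        if val_loss < min_val_loss:
            min_val_loss = val_loss
            patience_counter = 0        # reset counter on improvement
        else:
            patience_counter += 1       # increment counter on no improvement
            if patience_counter >= patience:
                return epoch            # stop training at this epoch

    return len(val_losses)  # patience never exceeded


print(early_stopping([0.5, 0.4, 0.3], [0.2, 0.1, 0.15], patience=1))  # -> 2
```

The final print reproduces the worked example: the counter reaches the patience of 1 at epoch index 2, so 2 is returned.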

Overfitting occurs when a model learns the noise in the training data rather than the underlying pattern, leading to poor generalization. Early stopping prevents this by stopping training at the point where the model generalizes best, as indicated by the validation loss.

Constraints:

  • Use NumPy for array operations
  • Return the epoch index where training should stop
  • If patience is never exceeded, return the total number of epochs (len(val_losses))
  • Patience counter resets when a new minimum validation loss is found
Test Cases:

  Test Case 1
  Input: [[0.1, 0.2, 0.3], [0.05, 0.1, 0.15]]
  Expected: 2

  Test Case 2
  Input: [[0.5, 0.4, 0.3], [0.2, 0.1, 0.15]]
  Expected: 1

  + 3 hidden test cases