Click4Ai

101. Multi-Layer Perceptron (MLP)

Difficulty: Medium

Implement a Multi-Layer Perceptron with one hidden layer using NumPy. An MLP is a fundamental feedforward neural network where data flows from the input layer through one or more hidden layers to the output layer. Each neuron in a layer is connected to every neuron in the next layer (fully connected).

The forward pass through a single hidden layer MLP is computed as follows:

hidden = sigmoid(X @ W1 + b1)

output = hidden @ W2 + b2

where sigmoid(x) = 1 / (1 + exp(-x))
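As a sketch, the general forward pass above can be written directly in NumPy (the parameter names follow the formulas; the shapes are illustrative):

```python
import numpy as np

def sigmoid(x):
    # Squashes each value into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(X, W1, b1, W2, b2):
    # Hidden layer: affine transform followed by the sigmoid activation.
    hidden = sigmoid(X @ W1 + b1)    # shape (n_samples, n_hidden)
    # Output layer: affine transform, no activation.
    return hidden @ W2 + b2          # shape (n_samples, n_outputs)
```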

Your function mlp(X, weights) implements a simplified version of this network: apply the sigmoid activation to the hidden layer (computed as the matrix product X @ weights), then sum across the hidden units to produce one output value per sample. This is equivalent to the forward pass above with zero biases and W2 fixed to a column of ones.

Example:

Input: X = [[1, 2], [3, 4]], weights = [[0.1, 0.2], [0.3, 0.4]]

Hidden layer: sigmoid(X @ weights) = sigmoid([[0.7, 1.0], [1.5, 2.0]]) ≈ [[0.6682, 0.7311], [0.8176, 0.8808]]

Output: sum of hidden layer activations per sample -> approximately [1.3992, 1.6984]

The sigmoid activation function squashes values into the range (0, 1), introducing non-linearity into the network. Without activation functions, stacking multiple linear layers would be equivalent to a single linear transformation, making the network unable to learn complex patterns.
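That collapse is easy to check numerically: composing two linear layers with no activation between them gives exactly one linear layer whose weight matrix is the product of the two (a small illustrative check with random matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))    # a batch of 4 samples
A = rng.normal(size=(3, 5))    # "layer 1" weights
B = rng.normal(size=(5, 2))    # "layer 2" weights

# Two stacked linear layers...
stacked = (X @ A) @ B
# ...are the same map as a single layer with combined weights A @ B.
collapsed = X @ (A @ B)
assert np.allclose(stacked, collapsed)
```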

Constraints:

  • Input shape: (n_samples, n_features)
  • Weights shape: (n_features, n_hidden)
  • Use the sigmoid activation function for the hidden layer
  • Return a 1D array with one output value per sample
Test Cases:

Test Case 1
Input: X = [[1, 2], [3, 4]], weights = [[0.1, 0.2], [0.3, 0.4]] (as in the example)
Expected: approximately [1.3992, 1.6984]

Test Case 2
Input: X = [[5, 6], [7, 8]], weights = [[0.1, 0.2], [0.3, 0.4]]
Expected: approximately [1.8766, 1.9469]

+ 3 hidden test cases