Dropout Layer
Implement a Dropout layer for a deep learning model. Dropout is a regularization technique that randomly sets a fraction of the input neurons to zero during training. This prevents neurons from co-adapting and forces the network to learn more robust features.
The Dropout operation is computed as follows:
# During training:
mask = random_values > dropout_rate # Binary mask (0s and 1s)
output = input * mask / (1 - dropout_rate) # Inverted dropout scaling
# During inference:
output = input # No dropout applied
Your function dropout_layer(input_array, dropout_rate) should generate a random binary mask in which each element has probability dropout_rate of being zeroed out, apply that mask to the input array, and scale the surviving elements by 1 / (1 - dropout_rate).
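One way to sketch this in NumPy, directly following the pseudocode above (the `training` flag is an added convenience, not part of the required signature):

```python
import numpy as np

def dropout_layer(input_array, dropout_rate, training=True):
    """Inverted dropout: zero elements with probability dropout_rate,
    scale survivors by 1 / (1 - dropout_rate) so the expected value
    of each neuron is unchanged."""
    x = np.asarray(input_array, dtype=float)
    if not training or dropout_rate == 0.0:
        # No dropout at inference time (or when the rate is zero).
        return x
    # Each element is kept with probability (1 - dropout_rate).
    mask = np.random.rand(*x.shape) > dropout_rate
    # Scale surviving activations (inverted dropout).
    return x * mask / (1.0 - dropout_rate)
```

With dropout_rate = 0.5, each surviving element is doubled, so every output entry is either 0 or twice the corresponding input.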
Example:
Input: input_array = [[1, 2], [3, 4]], dropout_rate = 0.5
Mask (random): [[1, 0], [1, 0]]
Output: [[2, 0], [6, 0]] (elements where the mask is 0 are dropped; survivors are scaled by 1 / (1 - 0.5) = 2)
Inverted dropout scales the remaining activations by 1 / (1 - dropout_rate) during training so that the expected value of each neuron remains the same. This means no adjustment is needed during inference, simplifying deployment.
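A quick empirical check of this expectation-preserving property (an illustration, not part of the exercise): apply inverted dropout to a large array of ones and confirm the mean stays near 1.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.ones(100_000)
rate = 0.5

# Inverted dropout on a constant input.
mask = rng.random(x.shape) > rate
out = x * mask / (1 - rate)

# Roughly half the elements become 0, the rest become 2,
# so the mean of the output stays close to the original mean of 1.
print(out.mean())
```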
Test Cases (dropout_rate = 0.5; the mask shown is one possible random draw):
Input: [[1, 2], [3, 4]], mask = [[1, 0], [1, 0]] -> Output: [[2, 0], [6, 0]]
Input: [[5, 6], [7, 8]], mask = [[0, 1], [0, 1]] -> Output: [[0, 12], [0, 16]]
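Because the mask is random, exact outputs depend on the draw; the worked values can be checked by applying a fixed mask deterministically. The helper below is a hypothetical convenience for that check, not part of the required function:

```python
import numpy as np

def apply_mask(x, mask, dropout_rate):
    # Deterministic variant: apply a given binary mask with
    # inverted-dropout scaling by 1 / (1 - dropout_rate).
    return np.asarray(x, dtype=float) * np.asarray(mask) / (1.0 - dropout_rate)

print(apply_mask([[1, 2], [3, 4]], [[1, 0], [1, 0]], 0.5))
# [[2. 0.]
#  [6. 0.]]
```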