
149. Pruning Neural Networks (Hard)

Implement magnitude-based weight pruning for neural networks. Pruning removes weights whose absolute values fall below a given threshold by setting them to zero, creating a sparse network. This reduces model size and computational cost while aiming to preserve accuracy.

Algorithm (Magnitude Pruning):

1. For each weight w in the weight matrix:

- If |w| < threshold: set w = 0 (prune)

- Otherwise: keep w unchanged

2. Equivalently, build a boolean mask and multiply:

   mask = |weights| >= threshold
   pruned_weights = weights * mask

Using NumPy:

pruned = np.where(np.abs(weights) < threshold, 0, weights)

Example:

Input: weights = [[0.1, 0.2],
                  [0.3, 0.4]], threshold = 0.15

|0.1| = 0.1 < 0.15 -> pruned to 0

|0.2| = 0.2 >= 0.15 -> kept

|0.3| = 0.3 >= 0.15 -> kept

|0.4| = 0.4 >= 0.15 -> kept

Output: [[0, 0.2],
         [0.3, 0.4]]

Small-magnitude weights contribute less to the network output and can often be removed without significantly hurting accuracy. After pruning, the network can be fine-tuned to recover any lost accuracy. The resulting sparse weight matrices can be stored more compactly and, when paired with sparse-aware storage formats and kernels, can speed up inference.
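The rule above can be sketched as a small NumPy function (the name `magnitude_prune` is illustrative, not part of the problem statement):

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, threshold: float) -> np.ndarray:
    """Zero out weights whose absolute value is below threshold."""
    # Boolean mask: True where the weight survives pruning.
    mask = np.abs(weights) >= threshold
    # Multiplying by the mask sets pruned positions exactly to 0.
    return weights * mask

weights = np.array([[0.1, 0.2], [0.3, 0.4]])
pruned = magnitude_prune(weights, 0.15)
print(pruned)                 # [[0.  0.2]
                              #  [0.3 0.4]]
print(np.mean(pruned == 0))   # fraction pruned (sparsity): 0.25
```

Multiplying by the mask (rather than indexing and assigning) keeps the operation fully vectorized and returns a new array, leaving the original weights untouched.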

Constraints:

  • `weights` is a 2D NumPy array of floats.
  • `threshold` is a non-negative float.
  • Comparison uses absolute values of weights.
  • Pruned weights must be set exactly to 0.
  • Use NumPy for all operations.

Test Cases

    Test Case 1
    Input: [[0.1,0.2],[0.3,0.4]]
    Expected: [[0,0.2],[0.3,0.4]]
    Test Case 2
    Input: [[0.5,0.6],[0.7,0.8]]
    Expected: [[0.5,0.6],[0.7,0.8]]
    + 3 hidden test cases
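The two visible test cases can be checked with a short script. The statement does not give a threshold for them, so this assumes threshold = 0.15, carried over from the worked example (it is consistent with both expected outputs):

```python
import numpy as np

def magnitude_prune(weights, threshold):
    # np.where form, equivalent to the mask-and-multiply approach.
    return np.where(np.abs(weights) < threshold, 0, weights)

cases = [
    ([[0.1, 0.2], [0.3, 0.4]], [[0, 0.2], [0.3, 0.4]]),
    ([[0.5, 0.6], [0.7, 0.8]], [[0.5, 0.6], [0.7, 0.8]]),
]
for inp, expected in cases:
    out = magnitude_prune(np.array(inp), 0.15)  # assumed threshold
    assert np.allclose(out, expected)
print("both visible test cases pass")
```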