Catastrophic Forgetting
**Problem:** Implement a simple experience replay buffer to mitigate catastrophic forgetting in a neural network.
**Example:** Suppose a neural network is trained on a sequence of tasks, each consisting of a set of input-output pairs. The network is trained on the first task, then the second, and so on. After training on the second task, the network has forgotten how to perform the first; this is catastrophic forgetting. To mitigate it, we store the first task's input-output pairs in a buffer and periodically sample from the buffer to fine-tune the network on the earlier task while training on later ones.
**Constraints:** The buffer should store the input-output pairs of every task seen so far. During training, the network should be fine-tuned on pairs sampled from the buffer.
Test Cases
- Input tasks: `[[[1, 2], [3, 4]], [[5, 6], [7, 8]]]` → Buffer contents: `[[1, 2], [3, 4], [5, 6], [7, 8]]`
- Input tasks: `[[[9, 10], [11, 12]], [[13, 14], [15, 16]]]` → Buffer contents: `[[9, 10], [11, 12], [13, 14], [15, 16]]`
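A minimal sketch of such a buffer in Python, matching the test cases above. The class name `ReplayBuffer` and its method names are illustrative choices, not part of the problem statement; a real setup would sample from the buffer inside the training loop of whatever framework is in use.

```python
import random


class ReplayBuffer:
    """Stores input-output pairs from all tasks seen so far.

    Pairs from earlier tasks can be sampled and mixed into training
    on later tasks to mitigate catastrophic forgetting (rehearsal).
    """

    def __init__(self):
        self.pairs = []

    def add_task(self, task):
        # task is a list of [input, output] pairs; flatten it into the buffer
        self.pairs.extend(task)

    def sample(self, k):
        # Draw up to k pairs uniformly at random for fine-tuning
        return random.sample(self.pairs, min(k, len(self.pairs)))


# First test case: two tasks of two pairs each
buffer = ReplayBuffer()
buffer.add_task([[1, 2], [3, 4]])
buffer.add_task([[5, 6], [7, 8]])
print(buffer.pairs)  # [[1, 2], [3, 4], [5, 6], [7, 8]]
```

During training on a new task, one would interleave mini-batches of fresh data with batches drawn via `sample`, so gradient updates continue to reflect earlier tasks.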