### Problem: Adversarial Training
In this problem, we will implement adversarial training to improve the robustness of a deep learning model.
**Example:** Consider a simple neural network that classifies images into two classes. We want to train this network to be robust against adversarial attacks.
**Constraints:** Use NumPy to generate random images and labels. Implement the Fast Gradient Sign Method (FGSM) attack, which perturbs each input in the direction of the sign of the loss gradient with respect to that input, to generate adversarial examples.
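A minimal NumPy sketch of the FGSM perturbation, consistent with the visible test cases (an epsilon of 0.01 and all-positive gradients would map 0.5 to 0.51). The function name `fgsm_attack` and the epsilon value are assumptions for illustration; in a full solution the gradients would come from backpropagating the loss through the model:

```python
import numpy as np

def fgsm_attack(images: np.ndarray, gradients: np.ndarray, epsilon: float = 0.01) -> np.ndarray:
    """Apply the FGSM perturbation: x_adv = x + epsilon * sign(grad_x loss).

    images:    input batch, any shape
    gradients: gradient of the loss w.r.t. each input element, same shape as images
    epsilon:   perturbation magnitude (assumed 0.01 to match the sample tests)
    """
    return images + epsilon * np.sign(gradients)

# Example: with uniformly positive gradients, every pixel shifts up by epsilon.
x = np.full((2, 3), 0.5)
grads = np.ones((2, 3))          # stand-in for real loss gradients
x_adv = fgsm_attack(x, grads)    # each 0.5 becomes 0.51
```

During adversarial training, the model would then be trained on a mix of clean inputs and these perturbed copies so that its predictions remain stable under small gradient-aligned perturbations.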
**Test Cases**

Test Case 1
Input:
[[0.5, 0.5, 0.5], [0.5, 0.5, 0.5]]
Expected:
[[0.51, 0.51, 0.51], [0.51, 0.51, 0.51]]

Test Case 2
Input:
[[0.2, 0.2, 0.2], [0.2, 0.2, 0.2]]
Expected:
[[0.21, 0.21, 0.21], [0.21, 0.21, 0.21]]

+ 3 hidden test cases