Wasserstein GAN (WGAN) is a variant of Generative Adversarial Networks (GANs) that uses the Wasserstein (Earth Mover's) distance as its loss function instead of the Jensen-Shannon divergence minimized by the standard GAN objective. Because the Wasserstein distance provides meaningful gradients even when the real and generated distributions have little overlap, this approach stabilizes training and improves the quality of generated samples. In this problem, you will implement a WGAN with a critic network that uses the Leaky ReLU activation function and a generator network whose output layer uses the Tanh activation function.
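To make the architecture concrete, here is a minimal NumPy sketch of the two networks. All layer sizes, weight names, and the two-layer structure are illustrative assumptions, not part of the problem statement; the point is that the critic uses Leaky ReLU and outputs an unbounded real-valued score (no sigmoid, unlike a standard GAN discriminator), while the generator ends in Tanh so its outputs lie in [-1, 1].

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(x, alpha=0.2):
    # Leaky ReLU: small negative slope instead of zeroing negative inputs
    return np.where(x > 0, x, alpha * x)

def generator(z, W1, W2):
    # Hypothetical two-layer generator; final Tanh bounds outputs to [-1, 1]
    h = leaky_relu(z @ W1)
    return np.tanh(h @ W2)

def critic(x, V1, V2):
    # Critic returns an unbounded scalar score per sample (no sigmoid)
    h = leaky_relu(x @ V1)
    return h @ V2

# Toy dimensions (assumed): latent dim 8, data dim 4
W1 = rng.normal(size=(8, 16)) * 0.1
W2 = rng.normal(size=(16, 4)) * 0.1
V1 = rng.normal(size=(4, 16)) * 0.1
V2 = rng.normal(size=(16, 1)) * 0.1

real = rng.normal(size=(32, 4))
fake = generator(rng.normal(size=(32, 8)), W1, W2)

# Critic's objective estimates E[critic(real)] - E[critic(fake)],
# which approximates the Wasserstein distance (up to the Lipschitz constraint)
w_estimate = critic(real, V1, V2).mean() - critic(fake, V1, V2).mean()
```

A real implementation would train both networks with backpropagation; this sketch only shows the forward pass and the quantity the critic maximizes.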
Example:
Suppose we have a dataset of 1000 images of size 64x64. We want to train a WGAN to generate new images that are similar to the training data.
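One detail the example glosses over: to keep the Wasserstein estimate valid, the original WGAN algorithm constrains the critic to be (approximately) 1-Lipschitz by clipping its weights to a small interval after every gradient update. A minimal sketch of that step, with the clipping constant c = 0.01 taken from the original WGAN paper and the weight matrix shape assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

c = 0.01  # clipping constant from the original WGAN algorithm

# Hypothetical critic weight matrix after a gradient update
V = rng.normal(size=(16, 1))

# Weight clipping crudely enforces the Lipschitz constraint on the critic;
# every parameter is forced into [-c, c]
V_clipped = np.clip(V, -c, c)
```

Later variants (e.g. WGAN-GP) replace clipping with a gradient penalty, but plain weight clipping is the mechanism in the original WGAN.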
Constraints:
Test Cases
Test Case 1
Input:
[[1, 2], [3, 4]]
Expected:
None
Test Case 2
Input:
[[5, 6], [7, 8]]
Expected:
None
+ 3 hidden test cases