Click4Ai

134.

Hard

Encoder-Decoder Architecture

Implement an Encoder-Decoder architecture using dense (fully connected) layers. The encoder compresses the input into a lower-dimensional latent representation, and the decoder reconstructs an output from that representation. This pattern underpins autoencoders, variational autoencoders (VAEs), and sequence-to-sequence models.

Architecture:

Encoder: Maps input to a latent representation

latent = tanh(input @ W_encoder)

Decoder: Maps latent representation back to output space

output = tanh(latent @ W_decoder)

Shapes:

input: (n_samples, input_dim)

W_encoder: (input_dim, latent_dim)

latent: (n_samples, latent_dim)

W_decoder: (latent_dim, output_dim)

output: (n_samples, output_dim)
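The two matrix multiplications above can be sketched directly in numpy (the function name `encoder_decoder` is an assumption; the problem only specifies the math):

```python
import numpy as np

def encoder_decoder(x, W_encoder, W_decoder):
    """Forward pass: compress input to a latent code, then decode it back out."""
    latent = np.tanh(x @ W_encoder)       # (n_samples, latent_dim)
    output = np.tanh(latent @ W_decoder)  # (n_samples, output_dim)
    return output

# Shape check with random weights
x = np.random.rand(2, 3)
out = encoder_decoder(x, np.random.rand(3, 2), np.random.rand(2, 3))
print(out.shape)  # (2, 3)
```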

Example:

Input:          W_encoder (3x2):   W_decoder (2x3):

[[1, 2, 3],     [[0.1, 0.2],       [[0.3, 0.1, 0.2],
 [4, 5, 6]]      [0.3, 0.4],        [0.4, 0.2, 0.1]]
                 [0.5, 0.6]]

Encoder: latent = tanh(input @ W_encoder)

input @ W_enc = [[1*0.1 + 2*0.3 + 3*0.5, 1*0.2 + 2*0.4 + 3*0.6],
                 [4*0.1 + 5*0.3 + 6*0.5, 4*0.2 + 5*0.4 + 6*0.6]]

              = [[2.2, 2.8],
                 [4.9, 6.4]]

latent = tanh([[2.2, 2.8], [4.9, 6.4]]) = [[0.976, 0.993], [1.000, 1.000]]

Decoder: output = tanh(latent @ W_decoder)

= tanh([[0.976*0.3+0.993*0.4, ...], [...]])
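The worked example can be reproduced step by step with the matrices given above, confirming the intermediate values:

```python
import numpy as np

x = np.array([[1., 2., 3.], [4., 5., 6.]])
W_enc = np.array([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])
W_dec = np.array([[0.3, 0.1, 0.2], [0.4, 0.2, 0.1]])

pre = x @ W_enc        # [[2.2, 2.8], [4.9, 6.4]]
latent = np.tanh(pre)  # ~[[0.976, 0.993], [1.000, 1.000]]
output = np.tanh(latent @ W_dec)  # shape (2, 3)
```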

**Explanation:** The Encoder-Decoder pattern is one of the most important architectural paradigms in deep learning. The encoder learns to compress a high-dimensional input into a compact latent representation that captures its most essential features, and the decoder learns to reconstruct meaningful output from this compressed representation. In autoencoders, the target output is the input itself (reconstruction); in translation models, the encoder processes the source language and the decoder generates the target language.

Constraints:

  • Input is a 2D numpy array of shape (n_samples, input_dim)
  • Encoder weight matrix has shape (input_dim, latent_dim)
  • Decoder weight matrix has shape (latent_dim, output_dim)
  • Use np.tanh as activation for both encoder and decoder
  • Return the decoder output as a 2D numpy array
Test Cases:

Test Case 1
Input: input=[[1,2,3],[4,5,6]], W_enc=(3,2), W_dec=(2,3)
Expected: shape (2,3) output

Test Case 2
Input: input=[[0,0,0]], W_enc=any, W_dec=any
Expected: [[0,0,...,0]] (tanh(0)=0)

+ 3 hidden test cases
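Test Case 2 follows directly from tanh(0) = 0: a zero input produces a zero pre-activation, hence a zero latent, hence a zero output, regardless of the weights. A quick check (the random weights here stand in for "any"):

```python
import numpy as np

x = np.zeros((1, 3))
W_enc = np.random.rand(3, 2)  # arbitrary weights
W_dec = np.random.rand(2, 3)

# tanh(0) = 0 at both stages, so the output is all zeros
out = np.tanh(np.tanh(x @ W_enc) @ W_dec)
print(out)  # [[0. 0. 0.]]
```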