Contrastive Loss
Implement the Contrastive Loss function, a fundamental loss used in similarity and metric learning. It trains a model to produce embeddings where similar pairs are pulled close together and dissimilar pairs are pushed apart beyond a specified margin.
Formula:
L = (1 - Y) * 0.5 * D^2 + Y * 0.5 * max(0, margin - D)^2
Where:
D = Euclidean distance between embedding1 and embedding2
= ||embedding1 - embedding2||_2
Y = label (0 for similar pairs, 1 for dissimilar pairs)
margin = minimum desired distance for dissimilar pairs (default 2)
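The formula above can be sketched directly in NumPy. This is a minimal reference implementation, assuming the function name `contrastive_loss` and a NumPy-array-compatible input (both are illustrative choices, not part of the problem statement):

```python
import numpy as np

def contrastive_loss(embedding1, embedding2, label, margin=2.0):
    # D: Euclidean (L2) distance between the two embeddings
    d = np.linalg.norm(np.asarray(embedding1, dtype=float) -
                       np.asarray(embedding2, dtype=float))
    # label = 0 (similar): pull embeddings together, penalize squared distance
    similar_term = (1 - label) * 0.5 * d ** 2
    # label = 1 (dissimilar): push apart until distance exceeds the margin
    dissimilar_term = label * 0.5 * max(0.0, margin - d) ** 2
    return similar_term + dissimilar_term
```

Note that the hinge term `max(0, margin - D)` makes the dissimilar-pair loss vanish once the embeddings are already farther apart than the margin.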
Example:
Input: embedding1 = [1, 2, 3], embedding2 = [4, 5, 6], label = 0 (similar), margin = 2
D = sqrt((4-1)^2 + (5-2)^2 + (6-3)^2) = sqrt(27) ≈ 5.196
Since label = 0 (similar): L = 0.5 * D^2 = 0.5 * 27 = 13.5
With label = 1 (dissimilar) instead: L = 0.5 * max(0, 2 - 5.196)^2 = 0.5 * 0 = 0
Output (label=0): 13.5
For similar pairs (label = 0), the loss is proportional to the squared distance, penalizing embeddings that are far apart. For dissimilar pairs (label = 1), the loss activates only when the distance is less than the margin, pushing the embeddings further apart.
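The arithmetic in the example above can be checked with a short standard-library sketch (variable names are illustrative):

```python
import math

# Distance between the example embeddings: sqrt(3^2 + 3^2 + 3^2) = sqrt(27)
d = math.dist([1, 2, 3], [4, 5, 6])

# label = 0 (similar): loss is half the squared distance
loss_similar = 0.5 * d ** 2

# label = 1 (dissimilar): distance already exceeds margin = 2, so the hinge is zero
loss_dissimilar = 0.5 * max(0.0, 2.0 - d) ** 2

print(loss_similar, loss_dissimilar)
```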
Test Cases
Input: [[1, 2, 3], [4, 5, 6], 0] → Expected output: 13.5
Input: [[1, 2, 3], [1, 2, 3], 1] → Expected output: 2.0