Implement **Linear Discriminant Analysis (LDA)** for supervised dimensionality reduction.
Algorithm:
1. Compute the **within-class scatter matrix** S_W:
S_W = sum_over_classes( sum_over_samples_in_class( (x - mean_c)(x - mean_c)^T ) )
2. Compute the **between-class scatter matrix** S_B:
S_B = sum_over_classes( n_c * (mean_c - mean_overall)(mean_c - mean_overall)^T )
3. Compute eigenvalues/eigenvectors of S_W^(-1) @ S_B
4. Select top n_components eigenvectors (sorted by eigenvalue magnitude)
5. Project data onto these discriminant directions
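The five steps above can be sketched as a small NumPy class. The names (`LDA`, `fit`, `transform`, `linear_discriminants`) mirror the example and test cases below but are otherwise assumptions, not a library API; `pinv` is used in place of a plain inverse in case S_W is singular.

```python
import numpy as np

class LDA:
    """Minimal LDA sketch following the scatter-matrix steps above."""

    def __init__(self, n_components):
        self.n_components = n_components
        self.linear_discriminants = None  # shape (n_components, n_features) after fit

    def fit(self, X, y):
        n_features = X.shape[1]
        mean_overall = X.mean(axis=0)
        S_W = np.zeros((n_features, n_features))  # within-class scatter
        S_B = np.zeros((n_features, n_features))  # between-class scatter
        for c in np.unique(y):
            X_c = X[y == c]
            mean_c = X_c.mean(axis=0)
            S_W += (X_c - mean_c).T @ (X_c - mean_c)
            diff = (mean_c - mean_overall).reshape(-1, 1)
            S_B += X_c.shape[0] * (diff @ diff.T)
        # Step 3: eigendecomposition of S_W^(-1) @ S_B (pinv guards singular S_W)
        eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(S_W) @ S_B)
        # Step 4: keep the top n_components eigenvectors by |eigenvalue|
        order = np.argsort(np.abs(eigvals))[::-1]
        self.linear_discriminants = np.real(eigvecs[:, order[:self.n_components]]).T
        return self

    def transform(self, X):
        # Step 5: project onto the discriminant directions
        return X @ self.linear_discriminants.T
```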
Example:
```python
lda = LDA(n_components=2)
lda.fit(X_train, y_train)
X_projected = lda.transform(X_test)  # Reduced dimensions
**Explanation:** Unlike PCA (unsupervised), LDA uses class labels to find directions that maximize class separation while minimizing within-class variance. Maximum useful components = min(n_features, n_classes - 1), because S_B is a sum of n_classes rank-one terms whose weighted mean deviations sum to zero, so its rank is at most n_classes - 1.
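The rank limit on S_B can be checked directly. The snippet below builds S_B for 3 classes in 5 dimensions (random data and the variable names are illustrative assumptions) and confirms its rank is n_classes - 1 = 2:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))        # 60 samples, 5 features
y = np.repeat(np.arange(3), 20)     # 3 classes

mean_overall = X.mean(axis=0)
S_B = np.zeros((5, 5))
for c in np.unique(y):
    X_c = X[y == c]
    d = (X_c.mean(axis=0) - mean_overall).reshape(-1, 1)
    S_B += X_c.shape[0] * (d @ d.T)

# The weighted class-mean deviations sum to zero, leaving at most
# n_classes - 1 independent directions.
print(np.linalg.matrix_rank(S_B))  # → 2
```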
Constraints:
Test Cases

| Input | Expected |
|---|---|
| 3-class data, n_components=2 | output shape (n_samples, 2) |
| 2-class data, n_components=1 | output shape (n_samples, 1) |
| linear_discriminants shape after fit | (n_components, n_features) |