Implement a **Linear Support Vector Machine (SVM)** classifier using gradient descent with hinge loss.
**Hinge Loss:**
For each sample: `loss = max(0, 1 - y_i * (w @ x_i + b))`
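As a quick illustration, the hinge loss is zero whenever a sample lies on the correct side of the margin and grows linearly with the violation otherwise. The values below are hypothetical, chosen only to show the arithmetic:

```python
import numpy as np

# Hypothetical example values (not from the problem statement).
w = np.array([0.5, -0.25])
b = 0.1
x_i = np.array([1.0, 2.0])
y_i = 1

margin = y_i * (w @ x_i + b)   # 1 * (0.5 - 0.5 + 0.1) = 0.1
loss = max(0.0, 1.0 - margin)  # margin < 1, so the sample is penalized: 0.9
```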
**Gradient Update Rules:**
If y_i * (w @ x_i + b) >= 1 (correct side with margin):
dw = 2 * lambda_param * w
db = 0
Else (margin violation):
dw = 2 * lambda_param * w - y_i * x_i
db = -y_i
where lambda_param is the regularization parameter.
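The two update branches can be sketched as a single per-sample step function. This is a minimal sketch; the name `gradient_step` and the `lr` parameter are illustrative, not part of the required interface:

```python
import numpy as np

def gradient_step(w, b, x_i, y_i, lr, lambda_param):
    """Apply one per-sample update following the rules above."""
    if y_i * (w @ x_i + b) >= 1:
        # Correct side with margin: only the regularization term contributes.
        dw = 2 * lambda_param * w
        db = 0.0
    else:
        # Margin violation: hinge-loss gradient is added.
        dw = 2 * lambda_param * w - y_i * x_i
        db = -y_i
    return w - lr * dw, b - lr * db

# With zero weights every sample violates the margin, so w moves toward y_i * x_i.
w2, b2 = gradient_step(np.zeros(2), 0.0, np.array([1.0, 1.0]), 1,
                       lr=0.001, lambda_param=0.01)
```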
**Example:**

```python
svm = LinearSVM(learning_rate=0.001, lambda_param=0.01)
svm.fit(X_train, y_train)  # y must be -1 or +1
predictions = svm.predict(X_test)
```
**Explanation:** SVM finds the hyperplane that maximizes the margin between classes. Points within or on the wrong side of the margin are penalized by hinge loss.
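Putting the pieces together, one possible implementation matching the example usage is sketched below. The `n_iters` hyperparameter and the toy data are assumptions (the problem statement does not fix an iteration count); prediction uses the sign of `w @ x + b`:

```python
import numpy as np

class LinearSVM:
    """Linear SVM trained by per-sample gradient descent on hinge loss."""

    def __init__(self, learning_rate=0.001, lambda_param=0.01, n_iters=1000):
        self.lr = learning_rate
        self.lambda_param = lambda_param
        self.n_iters = n_iters  # assumed hyperparameter, not specified above
        self.w = None
        self.b = None

    def fit(self, X, y):
        n_samples, n_features = X.shape
        self.w = np.zeros(n_features)
        self.b = 0.0
        for _ in range(self.n_iters):
            for x_i, y_i in zip(X, y):
                if y_i * (self.w @ x_i + self.b) >= 1:
                    # Correct side with margin: regularization only, db = 0.
                    self.w -= self.lr * (2 * self.lambda_param * self.w)
                else:
                    # Margin violation: include the hinge-loss gradient.
                    self.w -= self.lr * (2 * self.lambda_param * self.w - y_i * x_i)
                    self.b -= self.lr * (-y_i)
        return self

    def predict(self, X):
        # Sign of the decision function, mapped to the {-1, +1} label set.
        return np.where(X @ self.w + self.b >= 0, 1, -1)

# Hypothetical linearly separable toy data in the spirit of Test Case 1.
X = np.array([[1.0, 1.0], [2.0, 1.0], [7.0, 8.0], [8.0, 9.0]])
y = np.array([-1, -1, 1, 1])
svm = LinearSVM(learning_rate=0.001, lambda_param=0.01)
preds = svm.fit(X, y).predict(X)
```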
**Constraints:**
**Test Cases**

Test Case 1
Input: linearly separable 2D data, y = [-1, -1, 1, 1]
Expected: correct predictions on training data

Test Case 2
Input: predict returns array of -1 and +1 values
Expected: True

Test Case 3
Input: weights shape matches n_features after fit
Expected: True

+ 2 hidden test cases