Implement the **Accuracy Score** metric for classification.
Formula:
Accuracy = Number of correct predictions / Total number of predictions
Write a function `accuracy(y_true, y_pred)` that compares the true labels with the predicted labels and returns the fraction of correct predictions.
Example:
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]
accuracy(y_true, y_pred) → 0.8333
**Explanation:** Out of 6 predictions, 5 are correct (indices 0,1,3,4,5) and 1 is wrong (index 2). Accuracy = 5/6 = 0.8333.
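One straightforward sketch in Python (rounding to four decimal places is an assumption inferred from the expected outputs, not stated in the problem):

```python
def accuracy(y_true, y_pred):
    """Return the fraction of predictions that match the true labels."""
    if len(y_true) != len(y_pred):
        raise ValueError("y_true and y_pred must have the same length")
    # Count positions where the prediction equals the true label.
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return round(correct / len(y_true), 4)

accuracy([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 0, 1])  # → 0.8333
```

Using `zip` keeps the comparison index-free; the length check guards against mismatched inputs, for which accuracy is undefined.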
Test Cases

Test Case 1
Input: y_true=[1,0,1,1,0,1], y_pred=[1,0,0,1,0,1]
Expected: 0.8333

Test Case 2
Input: y_true=[1,1,1,1], y_pred=[1,1,1,1]
Expected: 1.0

Test Case 3
Input: y_true=[0,0,0], y_pred=[1,1,1]
Expected: 0.0

+ 2 hidden test cases