Accuracy Calculator
Enter your confusion matrix values to calculate classification accuracy.
What Is Accuracy?
Accuracy is one of the most common performance metrics in statistics, machine learning, quality control, and testing. In simple terms, it tells you how often your prediction (or decision) is correct.
If you made 100 predictions and 92 were correct, your accuracy is 92%. Because it is easy to understand, accuracy is often the first metric people look at when evaluating a model, an exam score, or a process.
The Basic Formula for Calculating Accuracy
For binary classification problems, we usually describe outcomes using four values:
- True Positive (TP): Predicted positive and actually positive.
- True Negative (TN): Predicted negative and actually negative.
- False Positive (FP): Predicted positive but actually negative.
- False Negative (FN): Predicted negative but actually positive.
The formula is:
Accuracy = (TP + TN) / (TP + TN + FP + FN)
Multiply the result by 100 to express it as a percentage.
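The formula is simple enough to sketch in a few lines of Python (the function name `accuracy` is our own, not a library call):

```python
def accuracy(tp, tn, fp, fn):
    """Fraction of correct predictions out of all predictions."""
    total = tp + tn + fp + fn
    if total == 0:
        raise ValueError("confusion matrix is empty")
    return (tp + tn) / total

# 92 correct out of 100 predictions
print(accuracy(50, 42, 5, 3))  # 0.92
```

Multiplying the returned value by 100 (or formatting with `:.0%`) gives the percentage form.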
Step-by-Step Example
Suppose a spam filter processes 200 emails:
- TP = 70 (spam correctly flagged as spam)
- TN = 110 (non-spam correctly kept in inbox)
- FP = 10 (good emails incorrectly flagged as spam)
- FN = 10 (spam emails missed by the filter)
Now compute:
- Correct predictions = TP + TN = 70 + 110 = 180
- Total predictions = TP + TN + FP + FN = 200
- Accuracy = 180 / 200 = 0.90 = 90%
You can verify this quickly using the calculator above.
When Accuracy Works Well
Accuracy is especially useful when classes are fairly balanced and the cost of different errors is similar. For example:
- Student quiz scoring (right vs. wrong answers)
- Manufacturing checks where both defect types matter equally
- Simple baseline model comparisons
When Accuracy Can Be Misleading
Accuracy can be deceptive on imbalanced datasets. Imagine disease screening where only 1% of people actually have the disease. A model that predicts “no disease” for everyone would still be 99% accurate, but it would fail to detect the patients who need care.
In those situations, you should also evaluate:
- Precision: How many predicted positives were truly positive?
- Recall (Sensitivity): How many actual positives were correctly detected?
- Specificity: How many actual negatives were correctly detected?
- F1 Score: Harmonic mean of precision and recall.
- Balanced Accuracy: Average of recall and specificity.
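All of these companion metrics come from the same four counts. A minimal sketch, using the spam-filter numbers from the earlier example:

```python
tp, tn, fp, fn = 70, 110, 10, 10

precision = tp / (tp + fp)                   # 70/80  = 0.875
recall = tp / (tp + fn)                      # 70/80  = 0.875
specificity = tn / (tn + fp)                 # 110/120 ~ 0.917
f1 = 2 * precision * recall / (precision + recall)
balanced_accuracy = (recall + specificity) / 2

print(precision, recall, specificity, f1, balanced_accuracy)
```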
Practical Tips for Better Accuracy Analysis
1) Always inspect class balance
Before trusting accuracy, check whether one class dominates the dataset. If one class is very large, high accuracy may hide weak performance on the minority class.
2) Use a confusion matrix
Looking only at one number can hide what kind of mistakes your system is making. A confusion matrix reveals whether false positives or false negatives are driving errors.
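One way to tabulate a confusion matrix from raw labels, sketched in plain Python with hypothetical data (no ML library assumed):

```python
from collections import Counter

y_true = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical actual labels (1 = positive)
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # hypothetical model predictions

# Count each (actual, predicted) pair
counts = Counter(zip(y_true, y_pred))
tp = counts[(1, 1)]  # predicted positive, actually positive
tn = counts[(0, 0)]  # predicted negative, actually negative
fp = counts[(0, 1)]  # predicted positive, actually negative
fn = counts[(1, 0)]  # predicted negative, actually positive
print(tp, tn, fp, fn)
```

Splitting errors into FP and FN this way shows which kind of mistake dominates, which a single accuracy number cannot.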
3) Track error rate too
Error rate is simply: Error Rate = 1 − Accuracy. Sometimes this is easier to communicate when discussing defects or failures.
4) Align metrics with business impact
In fraud detection, missing fraud (false negatives) is often expensive. In email filtering, blocking legitimate mail (false positives) can be more harmful. Choose metrics that reflect real-world costs.
Accuracy in Everyday Contexts
- Exams: number correct ÷ total questions
- Forecasting: correct directional predictions ÷ total forecasts
- Quality inspection: correctly classified items ÷ total inspected items
- Medical testing: correct diagnoses ÷ all tested cases
Final Thoughts
Calculating accuracy is straightforward, fast, and useful for a first-pass evaluation. Start with accuracy, but do not stop there—especially when data is imbalanced or error costs are unequal. Use the calculator on this page to compute results quickly, then pair that number with deeper metrics for better decisions.