Accuracy Calculator

Enter values from a confusion matrix to calculate accuracy. Use whole numbers (0 or greater).

Enter your confusion matrix values, then click Calculate Accuracy.

What does “accuracy” actually mean?

Accuracy is the share of predictions that are correct out of all predictions made. In plain language: How often did your model, process, or test get the answer right?

Whether you are grading a quiz, evaluating a medical screening model, reviewing a quality-control process, or benchmarking machine learning output, accuracy is often the first metric people ask for. It is simple, intuitive, and useful—as long as you understand when it can mislead.

The formula for calculating accuracy

In binary classification, we use four outcomes from the confusion matrix:

  • True Positives (TP): Predicted positive, actually positive
  • True Negatives (TN): Predicted negative, actually negative
  • False Positives (FP): Predicted positive, actually negative
  • False Negatives (FN): Predicted negative, actually positive

The accuracy formula is:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

Multiply by 100 to express it as a percentage.
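As a minimal sketch, the formula translates directly into a few lines of Python (the helper name `accuracy_from_confusion` is ours, for illustration only):

```python
def accuracy_from_confusion(tp: int, tn: int, fp: int, fn: int) -> float:
    """Accuracy = (TP + TN) / (TP + TN + FP + FN)."""
    total = tp + tn + fp + fn
    if total == 0:
        raise ValueError("Confusion matrix is empty: all counts are zero.")
    return (tp + tn) / total

# 8 + 9 correct out of 20 predictions
print(accuracy_from_confusion(tp=8, tn=9, fp=2, fn=1))  # 0.85
```

The zero-check matters: with no predictions at all, the formula divides by zero, so raising an error is clearer than returning a misleading number.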

Quick worked example

Suppose your model produces the following results:

  • TP = 45
  • TN = 135
  • FP = 12
  • FN = 8

Total predictions = 45 + 135 + 12 + 8 = 200
Correct predictions = 45 + 135 = 180
Accuracy = 180 / 200 = 0.90 = 90%
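The same arithmetic, checked in a short Python snippet:

```python
tp, tn, fp, fn = 45, 135, 12, 8

total = tp + tn + fp + fn   # 200 total predictions
correct = tp + tn           # 180 correct predictions
accuracy = correct / total  # 0.9

print(f"{accuracy:.0%}")  # 90%
```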

When accuracy is useful

Accuracy is a strong summary metric when classes are reasonably balanced and the costs of mistakes are similar. Common examples:

  • Exam scoring where each question has equal weight
  • Routine defect checks in stable production lines
  • Initial baseline comparisons between similar models

When accuracy can be misleading

Accuracy can hide serious problems in imbalanced datasets. Imagine fraud detection where only 1% of transactions are fraudulent. A naive model that predicts “not fraud” every time would still get ~99% accuracy, yet be practically useless.
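A quick simulation makes the trap concrete. Here we invent a toy dataset with 1% fraud and a model that always predicts "not fraud" (all numbers below are illustrative, not from any real system):

```python
n = 10_000
labels = [1] * 100 + [0] * 9_900  # 1% of transactions are fraud (label 1)
predictions = [0] * n             # naive model: always predict "not fraud"

correct = sum(p == y for p, y in zip(predictions, labels))
print(correct / n)  # 0.99 — 99% accuracy, yet it catches zero fraud
```

Despite the impressive-looking score, the model's recall on the fraud class is exactly zero.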

Use companion metrics

To get a fuller picture, pair accuracy with:

  • Precision: Of predicted positives, how many were truly positive?
  • Recall (Sensitivity): Of actual positives, how many did we catch?
  • Specificity: Of actual negatives, how many were correctly rejected?
  • F1 Score: Harmonic mean of precision and recall
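These companion metrics come from the same four confusion-matrix counts. A sketch of how one might compute them (the function name and the guards against empty denominators are our own choices):

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Precision, recall, specificity, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0    # of predicted positives
    recall = tp / (tp + fn) if tp + fn else 0.0       # of actual positives
    specificity = tn / (tn + fp) if tn + fp else 0.0  # of actual negatives
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)             # harmonic mean
    return {"precision": precision, "recall": recall,
            "specificity": specificity, "f1": f1}

# Using the worked example above (TP=45, TN=135, FP=12, FN=8):
metrics = classification_metrics(45, 135, 12, 8)
```

For those counts, precision is 45/57 ≈ 0.79 and recall is 45/53 ≈ 0.85, so the 90% accuracy figure holds up reasonably well here.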

The calculator above also reports these values so you can spot imbalance issues quickly.

Step-by-step process for calculating accuracy correctly

  1. Define what “positive” and “negative” mean in your context.
  2. Build the confusion matrix (TP, TN, FP, FN).
  3. Compute total predictions: TP + TN + FP + FN.
  4. Compute correct predictions: TP + TN.
  5. Divide correct by total, then convert to percentage.
  6. Interpret with precision, recall, and business impact in mind.
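Steps 2 through 5 can be sketched end to end, starting from raw label lists (the `confusion_counts` helper and the toy labels are illustrative assumptions, not part of the calculator):

```python
def confusion_counts(y_true: list, y_pred: list, positive=1) -> tuple:
    """Tally TP, TN, FP, FN, given which label counts as 'positive' (step 1)."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

# Toy data: 8 predictions against 8 true labels
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tp, tn, fp, fn = confusion_counts(y_true, y_pred)   # steps 2-4
accuracy = (tp + tn) / (tp + tn + fp + fn)          # step 5
print(f"{accuracy:.0%}")  # 75%
```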

Practical interpretation tips

1) Tie the metric to consequences

A false negative in cancer screening is not equivalent to a false positive in a product recommendation system. Accuracy alone does not capture that difference.

2) Compare to a baseline

Always ask: “Better than what?” Compare against majority-class guesses, prior models, or operational thresholds.

3) Track over time

One snapshot can look excellent while performance decays in production. Monitor for drift, check performance by segment, and recalibrate periodically.

Bottom line

Calculating accuracy is straightforward, and it remains one of the most useful first checks for model performance. But “high accuracy” is not automatically “high quality.” Use it as a starting point, then validate with precision, recall, specificity, and context-specific cost tradeoffs.