Cohen's d Calculator (Independent Groups)

Enter summary statistics for two independent groups to estimate the standardized mean difference (Cohen's d).

Formula used:
\( s_p = \sqrt{\frac{(n_1-1)s_1^2 + (n_2-1)s_2^2}{n_1+n_2-2}} \),   \( d = \frac{M_1-M_2}{s_p} \)
Enter values for both groups, then click Calculate Effect Size.
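The same calculation is easy to reproduce in code. The sketch below implements the pooled-SD formula shown above; the function name and the example numbers are illustrative, not taken from the calculator itself.

```python
import math

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d for two independent groups from summary statistics."""
    # Pooled SD: each group's variance is weighted by its degrees of freedom
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

# Example: M1=105, SD1=10, n1=30 vs M2=100, SD2=10, n2=30
d = cohens_d(105, 10, 30, 100, 10, 30)  # 0.5: groups differ by half a pooled SD
```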

What is effect size d?

Cohen’s d is a standardized effect size that expresses the difference between two means in standard deviation units. Unlike a p-value, which tells you whether a difference is statistically detectable, Cohen’s d tells you how large that difference is in practical terms.

For example, a d of 0.50 means the two groups differ by half of one pooled standard deviation. This makes it easier to compare outcomes across different studies, scales, and measures.

How this calculator works

This tool calculates Cohen’s d for two independent groups using:

  • Group means (M1 and M2)
  • Group standard deviations (SD1 and SD2)
  • Group sample sizes (n1 and n2)

It also reports Hedges’ g, which applies a small-sample correction to Cohen’s d. If your total sample is modest, many researchers prefer reporting Hedges’ g alongside or instead of Cohen’s d.
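The small-sample correction mentioned above multiplies d by a factor slightly below 1. A minimal sketch, using the standard approximation J = 1 − 3/(4·df − 1) with df = n1 + n2 − 2 (the function name is our own):

```python
def hedges_g(d, n1, n2):
    """Apply the small-sample correction to Cohen's d (Hedges' g)."""
    df = n1 + n2 - 2
    j = 1 - 3 / (4 * df - 1)  # correction factor, always slightly below 1
    return j * d

# With d = 0.5 and 30 per group, g is slightly smaller than d
g = hedges_g(0.5, 30, 30)
```

Because the correction factor approaches 1 as the sample grows, g and d are nearly identical for large samples; the difference matters mainly below roughly 20 participants per group.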

Interpreting Cohen’s d

Common benchmarks are:

  • 0.00 to 0.19: Trivial / very small
  • 0.20 to 0.49: Small
  • 0.50 to 0.79: Medium
  • 0.80 and above: Large

These are only guidelines. In medical, educational, or policy contexts, even a small effect may be meaningful if it affects many people or reduces risk in important ways.
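If you want to apply these benchmark bands programmatically, a small lookup on the absolute value of d is enough (the function name and labels below are illustrative):

```python
def interpret_d(d):
    """Map |d| onto the conventional benchmark labels."""
    size = abs(d)  # sign only encodes direction, not magnitude
    if size < 0.20:
        return "trivial / very small"
    if size < 0.50:
        return "small"
    if size < 0.80:
        return "medium"
    return "large"

label = interpret_d(0.63)  # "medium"
```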

When to use this calculator

Use it when:

  • You are comparing two independent groups (e.g., treatment vs control)
  • You have group means, SDs, and sample sizes
  • You want a standardized estimate of practical impact

Avoid it when:

  • The same participants are measured twice (paired/repeated measures designs need a different approach)
  • Your data are highly non-normal or contain extreme outliers that you have not checked with robust methods
  • You need an effect size based on odds ratios, correlations, or nonparametric methods

Why effect size matters more than “significant or not”

Large samples can make tiny differences statistically significant. Small samples can miss meaningful differences. Effect size gives you the missing context by focusing on magnitude. Best practice is to report:

  • The mean difference
  • A confidence interval
  • A p-value (if hypothesis testing is used)
  • An effect size (such as Cohen’s d)
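For the confidence-interval item above, a common large-sample approximation for the standard error of d can be used. This is a sketch under that normal-approximation assumption (exact intervals use the noncentral t distribution), and the function name is our own:

```python
import math

def d_confidence_interval(d, n1, n2, z=1.96):
    """Approximate 95% CI for Cohen's d (large-sample normal approximation)."""
    # Standard error combines sampling error in the means and in d itself
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

lo, hi = d_confidence_interval(0.5, 30, 30)  # interval is wide at n=30 per group
```

A wide interval is itself informative: it signals that the point estimate of d is imprecise and should be reported with caution.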

Example reporting language

“Participants in the intervention group scored higher than controls, d = 0.63 (medium effect), indicating a practically meaningful improvement in performance.”

Quick FAQ

Can Cohen’s d be negative?

Yes. The sign depends on the order of subtraction (M1 - M2). A negative value simply means Group 1 scored lower than Group 2.

Should I report absolute value or signed value?

Usually report the signed value to preserve direction, and then explain which group scored higher.

Is Hedges’ g always better?

Not always, but it is often preferred with smaller samples due to reduced upward bias in effect size estimates.
