Interactive Power Analysis Tool
This calculator uses a normal approximation for a two-group comparison of means (independent groups, equal variance assumption) with Cohen’s d as the standardized effect size.
Tip: Typical benchmarks for Cohen’s d are 0.2 (small), 0.5 (medium), and 0.8 (large), but context always matters.
What is power analysis?
Power analysis helps you decide how many participants you need before running a study. Instead of guessing sample size, you balance four connected pieces: effect size, sample size, significance level (alpha), and statistical power.
In practical terms, this lets you avoid two expensive mistakes:
- Underpowered studies that miss real effects because the sample is too small.
- Overpowered studies that use more time, money, and participant effort than necessary.
What this calculator does
This page focuses on two independent groups (for example, treatment vs control) using a standardized mean difference model.
You can compute:
- Required sample size: given effect size, alpha, and target power.
- Achieved power: given effect size and sample size.
- Minimum detectable effect (MDE): the smallest effect your design can reliably detect, given sample size, alpha, and target power.
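Under the normal approximation this page describes, all three computations come from the same relationship between effect size, alpha, power, and per-group n. A minimal sketch (function names are illustrative, not the calculator's actual code):

```python
from math import ceil, sqrt
from statistics import NormalDist

_Z = NormalDist().inv_cdf  # standard normal quantile function

def required_n(d, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided, two-group comparison of means."""
    z_a = _Z(1 - alpha / 2)   # critical value for two-sided alpha
    z_b = _Z(power)           # quantile for target power
    return ceil(2 * ((z_a + z_b) / d) ** 2)

def achieved_power(d, n, alpha=0.05):
    """Power achieved with n participants per group."""
    z_a = _Z(1 - alpha / 2)
    return NormalDist().cdf(d * sqrt(n / 2) - z_a)

def mde(n, alpha=0.05, power=0.80):
    """Smallest Cohen's d detectable with n per group at the target power."""
    return (_Z(1 - alpha / 2) + _Z(power)) * sqrt(2 / n)
```

For example, `required_n(0.5)` gives 63 per group, and `mde(63)` recovers a d of about 0.5, which is the internal consistency you would expect from the same approximation solved in different directions.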
How to choose realistic inputs
1) Effect size (Cohen’s d)
Cohen’s d is the difference in means divided by the pooled standard deviation. If prior studies exist, start there. If not, use pilot data or a smallest practically meaningful effect.
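If you have pilot data, the definition above translates directly into code. A small sketch using the sample-variance pooling described in the text (the function name is illustrative):

```python
from math import sqrt
from statistics import mean, variance

def cohens_d(group1, group2):
    """Difference in means divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    # Pool the two sample variances, weighted by degrees of freedom.
    s_pooled = sqrt(((n1 - 1) * variance(group1) + (n2 - 1) * variance(group2))
                    / (n1 + n2 - 2))
    return (mean(group1) - mean(group2)) / s_pooled
```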
2) Alpha
Alpha is your Type I error tolerance (false positive rate). A common choice is 0.05. Lower alpha means stricter evidence requirements and usually larger required sample size.
3) Power
Power is the probability of detecting an effect if it truly exists. Common targets are 0.80 or 0.90. Higher power means lower Type II error but larger sample requirements.
4) Allocation ratio
If group sizes differ, set the allocation ratio to n₂/n₁. Equal allocation (ratio = 1) is usually the most efficient use of participants unless recruitment or intervention costs differ by group.
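The equal-allocation formula generalizes to unequal groups. A sketch under the same normal approximation (the function name and defaults are illustrative):

```python
from math import ceil
from statistics import NormalDist

def required_n_unequal(d, ratio=1.0, alpha=0.05, power=0.80):
    """Group sizes (n1, n2) for a two-sided test, with ratio = n2/n1."""
    z = NormalDist().inv_cdf
    k = ((z(1 - alpha / 2) + z(power)) / d) ** 2
    n1 = ceil((1 + 1 / ratio) * k)  # reduces to 2*k when ratio = 1
    return n1, ceil(ratio * n1)
```

Comparing `required_n_unequal(0.5)` (63 + 63 = 126 total) with `required_n_unequal(0.5, ratio=2)` (48 + 96 = 144 total) illustrates why unequal allocation costs more participants overall for the same power.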
Interpretation guide
- If required sample size is very large, your expected effect may be too small for your available resources.
- If achieved power is below 0.80, be cautious about claiming a null result: a nonsignificant finding may reflect insufficient power rather than the absence of an effect.
- If MDE is larger than what is practically useful, redesign may be necessary.
Common pitfalls in sample size planning
Over-optimistic effect sizes
Planning around an unrealistically large effect will underestimate needed sample size. This is one of the most common design errors.
Ignoring missing data and attrition
If you expect dropout or unusable observations, inflate your initial recruitment target. For example, with expected 15% attrition, divide required final sample by 0.85.
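The inflation rule above is a one-liner; wrapping it in a helper (illustrative name) makes the rounding explicit:

```python
from math import ceil

def inflate_for_attrition(n_final, attrition_rate):
    """Recruitment target so that roughly n_final participants remain."""
    if not 0 <= attrition_rate < 1:
        raise ValueError("attrition_rate must be in [0, 1)")
    # Divide by the expected retention rate, then round up.
    return ceil(n_final / (1 - attrition_rate))
```

For instance, a required final sample of 126 with 15% expected attrition yields a recruitment target of `inflate_for_attrition(126, 0.15)`, i.e. 149.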
Confusing statistical significance with practical significance
A tiny effect can be statistically significant in a large sample. Always interpret effect magnitude and real-world impact, not only the p-value.
Quick workflow for study planning
- Define the smallest effect that matters in your context.
- Set alpha and power based on your field norms and risk tolerance.
- Use the calculator to get sample size.
- Adjust for attrition, exclusions, and feasibility constraints.
- Document assumptions in your protocol or preregistration.
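The workflow above can be walked through end to end. A self-contained sketch with hypothetical planning inputs (the specific values of d, alpha, power, and attrition are examples, not recommendations):

```python
from math import ceil
from statistics import NormalDist

# Hypothetical inputs: smallest effect that matters, field-norm alpha/power.
d, alpha, power, attrition = 0.4, 0.05, 0.80, 0.15

z = NormalDist().inv_cdf
n_per_group = ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)
n_recruit = ceil(n_per_group / (1 - attrition))  # inflate for dropout

print(f"Final sample needed: {n_per_group} per group")
print(f"Recruitment target:  {n_recruit} per group")
```

With these inputs the approximation gives 99 per group, inflated to 117 per group to allow for dropout; both numbers, and the assumptions behind them, belong in the protocol or preregistration.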
Method note
This implementation uses a normal approximation and is intended for planning and educational use. For more complex designs (e.g., non-normal outcomes, clustered designs, repeated measures, survival analysis, or multiple primary endpoints), use specialized methods and consult a statistician.