ToolPilot

A/B Test Sample Size Calculator

Calculate the sample size needed for a statistically reliable A/B test. Estimate test duration based on your daily traffic.

Test parameters

Baseline conversion rate: e.g., 5% means 5 conversions per 100 visitors.

Expected relative improvement: e.g., a 10% relative lift takes 5.0% to 5.5%.

Confidence level: the probability of not declaring a false positive.

Statistical power: the probability of detecting a true effect.

Duration estimate

Everything about A/B test sample size calculation

Why calculate sample size before an A/B test?

Running an A/B test without knowing the required sample size is like flipping a coin: you risk drawing conclusions too early, whether declaring a winner or a loser, without statistical backing.

A pre-test calculation ensures proper statistical power. You'll know exactly how many visitors to allocate between control and variant before making a business decision.

Our calculator is completely free, runs in your browser, and sends no data. Enter your current conversion rate, expected improvement, and confidence level to instantly get the required sample size.

Who uses this calculator?

Growth marketers
Plan A/B tests on landing pages, CTAs, and forms with precise visitor counts to get reliable results.
Product managers
Estimate test duration before launching a sprint. Anticipate the traffic needed to validate a product hypothesis.
Data analysts
Quickly verify the statistical feasibility of a test before mobilizing development teams.
Digital agencies
Present clients with a realistic test timeline based on actual traffic and defined conversion goals.

How to use the sample size calculator

Enter your current conversion rate (e.g., 3.5%).

Set the minimum detectable effect (MDE), the confidence level (typically 95%), and your average daily traffic.

The calculator instantly displays the number of visitors per variant and estimated test duration based on your daily traffic.
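The relative-improvement step above can be sketched as a small conversion: the formula needs an absolute target rate, so a relative MDE is first applied to the baseline. The 5% baseline and 10% relative lift below are illustrative inputs, not fixed defaults of the tool:

```python
def target_rate(baseline: float, relative_mde: float) -> float:
    """Absolute conversion rate implied by a relative improvement,
    e.g. a 5.0% baseline with a 10% relative lift -> 5.5%."""
    return baseline * (1 + relative_mde)

print(target_rate(0.05, 0.10))  # ~0.055, up to float rounding
```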

Frequently asked questions

What is sample size in an A/B test?
Sample size is the minimum number of visitors each variant (control and test) must receive for statistically significant results. It depends on the baseline conversion rate, minimum detectable effect, and chosen confidence level.
What confidence level should I choose for an A/B test?
The industry standard is 95%, meaning a 5% chance of declaring a winner when no real difference exists. For high-stakes financial decisions, some teams prefer 99%. A higher confidence level requires a larger sample.
How long does it take to reach statistical significance?
Duration depends on your daily traffic and the required sample size. For example, if you need 10,000 visitors per variant and receive 1,000 daily visitors, it will take about 20 days (as traffic is split between variants).
Can I stop an A/B test early if results look clear?
No. This common mistake is known as the 'peeking problem': stopping a test before reaching the calculated sample size significantly increases the risk of false positives. Always wait until completion, or use sequential testing methods designed for interim analyses.
Is my data sent to a server?
No. All calculations run locally in your browser using JavaScript. No data is transmitted, stored, or collected. Your privacy is fully respected.
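The duration arithmetic from the FAQ above can be sketched as follows, assuming traffic is split evenly across variants; the 10,000-per-variant and 1,000-daily figures are the FAQ's illustrative numbers:

```python
import math

def estimate_duration_days(required_per_variant: int, daily_visitors: int,
                           n_variants: int = 2) -> int:
    """Days needed to collect the required sample, assuming an even
    traffic split across all variants."""
    daily_per_variant = daily_visitors / n_variants
    return math.ceil(required_per_variant / daily_per_variant)

# FAQ example: 10,000 visitors per variant, 1,000 daily visitors
print(estimate_duration_days(10_000, 1_000))  # 20 days
```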

Understanding A/B test sizing

What is an A/B test and why does sample size matter?

An A/B test (or split test) compares two versions of a page, email, or UI element to determine which performs better. Sample size is critical because it ensures the observed difference between variants is real and not due to chance. Without sufficient sample size, you risk making decisions based on statistical noise rather than reliable signal.

How does statistical power influence an A/B test?

Statistical power (typically set at 80%) represents the probability of detecting a real difference when one exists. Insufficient power increases the risk of false negatives: concluding no difference exists when an improvement is actually present. Increasing power requires a larger sample, extending the test but strengthening conclusion reliability.

What formulas are used to calculate sample size?

The calculation uses the classic formula involving Z-scores for the significance level α and the power 1−β, along with the expected conversion proportions p₁ and p₂:

n = (Zα/2 + Zβ)² × (p₁(1−p₁) + p₂(1−p₂)) / (p₁ − p₂)²

where n is the required number of visitors per variant. This formula comes from hypothesis testing theory and applies to comparisons of binomial proportions between two independent groups.
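As a minimal sketch, the formula above transcribes directly into Python using the standard normal quantile function from the stdlib; the 5.0% → 5.5% inputs are illustrative:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p1: float, p2: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """n = (Z_{alpha/2} + Z_beta)^2 * (p1(1-p1) + p2(1-p2)) / (p1 - p2)^2,
    for a two-sided comparison of two independent binomial proportions."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. ~1.96 at 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # e.g. ~0.84 at 80% power
    variance_term = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance_term / (p1 - p2) ** 2)

# Illustrative: 5.0% baseline, 5.5% target, 95% confidence, 80% power
print(sample_size_per_variant(0.05, 0.055))  # roughly 31,000 per variant
```

Note that some calculators use a pooled-variance version of this formula, so results may differ slightly from one tool to another.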