
Statistical nitty-gritty

The fixed-sample, sequential, and Bayesian methods all start from the same place: estimating the mean ($\mu$) and variance ($\sigma^2$) of the lift; the former is an estimate of the size of the treatment effect (as a percentage of the baseline metric value), and the latter is an indication of how reliable that estimate is for predicting what will happen if you ship the treatment.

Once we have an estimate of the observed lift (both its mean and variance), we can use that to construct a confidence interval that describes the plausible values for the true lift.

Estimating lift

First, some notation:

  • For a given metric, we observe $y_i$ for each subject $i$ in the control group of size $n_C$, and $y_j$ for each subject $j$ in the treatment group of size $n_T$. The set of all observations in the control group is $\bold{Y}_C$, and for the treatment group it is $\bold{Y}_T$.

  • The true population mean (which we don't know!) of the metric among the control group is $\mu_C$, and among the treatment group it is $\mu_T$. Similarly, the true (unobserved) population variances are $\sigma^2_C$ and $\sigma^2_T$.

  • $m_C$ and $m_T$ are the averages of $\bold{Y}_C$ and $\bold{Y}_T$ across all subjects in the control group and treatment group, respectively:

    $$\begin{equation} \begin{split} m_C &= \frac{\sum_{i=1}^{n_C}{y_i}}{n_C} \\[1.2em] m_T &= \frac{\sum_{j=1}^{n_T}{y_j}}{n_T} \end{split} \end{equation}$$
  • $s^2_C$ and $s^2_T$ are the sample variances of $\bold{Y}_C$ and $\bold{Y}_T$ for the control group and treatment group, respectively:

    $$\begin{equation} \begin{split} s^2_C &= \frac{n_C}{n_C - 1}\left(\frac{\sum_{i=1}^{n_C}{y_i^2}}{n_C} - m_C^2\right) \\[1.2em] s^2_T &= \frac{n_T}{n_T - 1}\left(\frac{\sum_{j=1}^{n_T}{y_j^2}}{n_T} - m_T^2\right) \end{split} \end{equation}$$
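To make the bookkeeping concrete, here is a small illustrative sketch (not Eppo's actual implementation) of how the sample mean and Bessel-corrected sample variance above can be computed from the running sums $\sum{y_i}$ and $\sum{y_i^2}$, which is how they are typically accumulated in a data warehouse:

```python
def sample_moments(sum_y: float, sum_y_sq: float, n: int) -> tuple[float, float]:
    """Sample mean and sample variance (with the n/(n-1) Bessel correction),
    computed from the running sums of y and y^2."""
    m = sum_y / n
    s2 = (n / (n - 1)) * (sum_y_sq / n - m**2)
    return m, s2

# Example: control-group observations [2, 4, 6]
m_c, s2_c = sample_moments(sum_y=12.0, sum_y_sq=56.0, n=3)
# m_c = 4.0 and s2_c = 4.0, matching the direct computation on [2, 4, 6]
```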

Regardless of the distribution of $\bold{Y}_C$ and $\bold{Y}_T$, the Central Limit Theorem (CLT) says that, when $n_C$ and $n_T$ are sufficiently large, the sample means of $\bold{Y}_C$ and $\bold{Y}_T$ are each approximately normally distributed; that is:

$$\begin{equation} \begin{align*} \lim_{n_C \to \infty}m_C \sim&~~ \mathcal{N}(\mu_C; \frac{\sigma^2_C}{n_C}) \\[1em] \lim_{n_T \to \infty}m_T \sim&~~ \mathcal{N}(\mu_T; \frac{\sigma^2_T}{n_T}) \end{align*} \end{equation}$$
Ratio metrics

Eppo also supports ratio metrics, where instead of estimating the mean of a single metric, we are interested in understanding the ratio of two means (or, equivalently, the ratio of two sums), such as average order value or time per session. In this case (leaving out the $\small{C}$ / $\small{T}$ subscript for brevity), we observe from a sample of size $n$ the numerator metric $y_i$ and the denominator metric $z_i$ for each subject (with the set of all subject observations being $\bold{Y}$ and $\bold{Z}$ respectively) and are trying to estimate:

$$\mu_{ratio} = \frac{\mu_{\tiny{Y}}}{\mu_{\tiny{Z}}} = \frac{\sum_{i=1}^{n}{y_i}}{\sum_{i=1}^n{z_i}}$$

As above, $\mu_{\tiny{Y}}$ and $\mu_{\tiny{Z}}$ can be estimated by their sample means $m_{\tiny{Y}}$ and $m_{\tiny{Z}}$, which are normally distributed around the true values with variances $\frac{\sigma^2_{\tiny{Y}}}{n}$ and $\frac{\sigma^2_{\tiny{Z}}}{n}$, which in turn can be estimated by the sample variances $s^2_{\tiny{Y}}$ and $s^2_{\tiny{Z}}$.

Estimating the ratio $\mu_{ratio}$ then becomes a matter of dividing the distributions for each component of the ratio; under some reasonable assumptions[1], the resulting quotient is also approximately normally distributed:

$$\frac{\sum_i{\bold{Y}}}{\sum_i{\bold{Z}}} \xrightarrow{D} \mathcal{N}\big(\mu_{ratio},~\sigma^2_{ratio}\big)$$

The variance term can be calculated using the delta method (with $\sigma_{\!\tiny{\textit{YZ}}}$ representing the covariance between $\bold{Y}$ and $\bold{Z}$) as:

$$\sigma^2_{ratio} = \left( \frac{\mu_{\tiny{Y}}}{\mu_{\tiny{Z}}} \right)^2 \left( \frac{\sigma_{\tiny{Y}}^2}{\mu_{\tiny{Y}}^2} + \frac{\sigma_{\tiny{Z}}^2}{\mu_{\tiny{Z}}^2} - \frac{2\sigma_{\!\tiny{\textit{YZ}}}}{n \mu_{\tiny{Y}} \mu_{\tiny{Z}}} \right)$$

Note that the extra $n$ in the denominator of the last term is there because we need the covariance between the sample averages.

For each variation, then, we can plug in the sample moments of the numerator and denominator metrics to calculate the values for the ratio:

$$\begin{align} m_{ratio} &= \frac{m_{\tiny{Y}}}{m_{\tiny{Z}}} \\ s^2_{ratio} &= \left( \frac{m_{\tiny{Y}}}{m_{\tiny{Z}}} \right)^2 \left( \frac{s_{\tiny{Y}}^2}{m_{\tiny{Y}}^2} + \frac{s_{\tiny{Z}}^2}{m_{\tiny{Z}}^2} - \frac{2s_{\!\tiny{\textit{YZ}}}}{n m_{\tiny{Y}} m_{\tiny{Z}}} \right) \end{align}$$

The analysis below then simply uses these values for ratio metrics in place of the simple sample means and variances for each variation.

Frequentist analysis

We want to calculate the lift, which we'll call $\Delta$:

$$\Delta = \frac{\mu_T - \mu_C}{\mu_C} = \frac{\mu_T}{\mu_C} - 1$$

But, since we don't know the true values $\mu_T$ and $\mu_C$, we'll need to instead estimate the lift. We know from the CLT, as shown in equation 3 above, that $m_T$ and $m_C$ are approximately normally distributed (for sufficiently large $n_C$ and $n_T$); furthermore, since $m_T$ and $m_C$ are independent, under reasonable assumptions the ratio $\frac{m_T}{m_C}$ is approximately normal.[2] This allows us to model $\hat{\Delta}$ as a normal distribution:

$$\begin{align} \hat{\Delta} \sim&~~ \mathcal{N}(\hat{\mu}_{\tiny{\Delta}}, \hat{\sigma}^2_{\tiny{\Delta}}) \\[1em] \hat{\mu}_{\tiny{\Delta}} =&~~ \frac{m_T}{m_C} - 1 \\[1em] \hat{\sigma}^2_{\tiny{\Delta}} =&~~ \frac{m_T^2}{m_C^2}\left( \frac{s^2_C}{m_C^2} + \frac{s^2_T}{m_T^2} \right) \end{align}$$

Note that the calculation of the variance $\hat{\sigma}^2_{\tiny{\Delta}}$ relies on the delta method.

Thus, we have estimated the lift as being normally distributed with a mean of $\hat{\mu}_{\tiny{\Delta}}$ and a variance of $\hat{\sigma}^2_{\tiny{\Delta}}$. For the frequentist methods (fixed-sample and sequential), that's where we end, in terms of estimating the lift (see below for turning that estimate into confidence intervals). For Bayesian, however, there's one more step.
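As a minimal numeric sketch (with made-up group statistics, and transcribing the equations above as written), the lift estimate could be computed as:

```python
def lift_estimate(m_c, s2_c, m_t, s2_t):
    """Delta-method mean and variance of the relative lift (treatment vs. control).
    s2_c and s2_t are the variance terms exactly as they appear in the formula."""
    mu_delta = m_t / m_c - 1
    var_delta = (m_t**2 / m_c**2) * (s2_c / m_c**2 + s2_t / m_t**2)
    return mu_delta, var_delta

# Hypothetical example: control mean 10, treatment mean 11
mu_delta, var_delta = lift_estimate(m_c=10.0, s2_c=0.04, m_t=11.0, s2_t=0.04)
# mu_delta ≈ 0.10, i.e. a 10% estimated lift
```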


If you are using CUPED, then the estimate of the lift will be a bit more complicated; we still model the lift as a normal distribution, but the mean and variance are computed after using a ridge regression to account for random (that is, not correlated with the treatment assignment) pre-experiment differences between the groups. See the CUPED docs for more information.

Bayesian analysis

In a Bayesian framework, you start with a prior distribution, which describes what you believe before running the experiment. Then, you run the experiment and collect data, which you use to update your prior: in essence, you combine your pre-experiment beliefs about what the lift would be, with the evidence you've gotten from the experiment, into a new set of beliefs, called the posterior (because it comes after gathering data). The estimated average lift is then just the mean of this posterior distribution.

Setting the prior

In our implementation of the Bayesian approach, we use a normal distribution[3] as our prior for the lift:

$$\begin{equation} \hat{\Delta}_{prior} \sim \mathcal{N}(\mu_{prior}=0; ~\sigma_{prior}^2 = 0.1) \end{equation}$$

In other words, our prior is that the lift, on average, will be zero, and that for each metric, about 50% of experiments will show a lift between -21% and +21%, and about 95% of experiments will show a lift between -62% and +62%; from our experience running experiments, this is a fairly conservative prior, as having lifts over ±50% is extremely rare.
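Those ranges follow directly from the normal quantiles; a quick sanity check (illustrative only, using Python's standard library):

```python
from math import sqrt
from statistics import NormalDist

prior_sd = sqrt(0.1)  # standard deviation of the N(0, 0.1) prior, ≈ 0.316

half_50 = NormalDist().inv_cdf(0.75) * prior_sd   # half-width of the central 50% interval
half_95 = NormalDist().inv_cdf(0.975) * prior_sd  # half-width of the central 95% interval
print(round(half_50, 2), round(half_95, 2))  # 0.21 0.62, i.e. ±21% and ±62%
```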

Updating the prior

The evidence from the experiment is summarized as a normal distribution $\hat{\Delta}$, just as with the frequentist methods (see equations 2–4 above). However, for the Bayesian method we use this evidence to update the above prior, and the result is our posterior.

Specifically, our posterior is a normal distribution with mean $\mu_{post}$ and variance $\sigma_{post}^2$, where:[6]

$$\begin{align} \hat{\mu}_{post} =&~~ \frac{ \frac{ 1 }{ \sigma^2_{prior} } \mu_{prior} + \frac{ 1 }{ \hat{\sigma}^2_{\tiny{\Delta}} } \hat{\mu}_{\tiny{\Delta}} }{ \frac{1}{\sigma^2_{prior}} + \frac{1}{\hat{\sigma}^2_{\tiny{\Delta}}} } \\[2em] \hat{\sigma}^2_{post} =&~~ \frac{1}{\frac{1}{\sigma^2_{prior}} + \frac{1}{\hat{\sigma}^2_{\tiny{\Delta}}}} \end{align}$$

In other words, our posterior mean is the weighted average of the prior and the observed data, where for each term the weight is the precision (that is, the inverse of the variance). The variance, meanwhile, is related to the harmonic mean of the variances of the prior and the observed data.

Intuition for the posterior

We can also rewrite[4] the posterior mean $\mu_{post}$ (equation 6) as:

$$\begin{equation} \hat{\mu}_{post} = \hat{\mu}_{\tiny{\Delta}} - \frac{ \hat{\sigma}^2_{\tiny{\Delta}} }{ \hat{\sigma}^2_{\tiny{\Delta}} + \sigma^2_{prior} } (\hat{\mu}_{\tiny{\Delta}} - \mu_{prior}) \end{equation}$$

Note that $\tfrac{\hat{\sigma}^2_{\tiny{\Delta}}}{\hat{\sigma}^2_{\tiny{\Delta}} + \sigma^2_{prior}}$ reflects how spread out the data are relative to the prior,[5] and $(\hat{\mu}_{\tiny{\Delta}} - \mu_{prior})$ is the distance between what we've observed and our prior expectation. Thus, we can interpret equation 8 as showing that our posterior lift is the lift we observed in the experiment, shrunk toward the prior (that is, toward 0), and that the shrinkage will be larger if our data are noisy (as happens when we have few observations) and/or our prior is very strong (that is, $\sigma^2_{prior}$ is low).

Arriving at the posterior

Since our prior is that the lift is zero (that is, $\mu_{prior}=0$), we can simplify the statement of our posterior (equations 7 and 8) to:

$$\begin{align} \hat\Delta_{post} \sim&~~ \mathcal{N}(\mu_{post}, \sigma_{post}^2) \\[1.5em] \hat{\mu}_{post} =&~~ \hat{\mu}_{\tiny{\Delta}}\left( 1 - \frac{ \hat{\sigma}^2_{\tiny{\Delta}} }{ \hat{\sigma}^2_{\tiny{\Delta}} + \sigma^2_{prior} } \right) \\[1.5em] \hat{\sigma}^2_{post} =&~~ \frac{ 1 }{ \frac{1}{\sigma^2_{prior}} + \frac{1}{\hat{\sigma}^2_{\tiny{\Delta}}} } \end{align}$$
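The conjugate update above is simple enough to sketch directly; this illustrative snippet (hypothetical numbers, not Eppo's code) also checks that the precision-weighted form and the shrinkage form of the posterior mean agree:

```python
def posterior(mu_prior, var_prior, mu_delta, var_delta):
    """Conjugate normal update: precision-weighted mean, combined variance."""
    precision = 1 / var_prior + 1 / var_delta
    mu_post = (mu_prior / var_prior + mu_delta / var_delta) / precision
    return mu_post, 1 / precision

# Default prior N(0, 0.1); hypothetical observed lift of +5% with variance 0.001
mu_post, var_post = posterior(0.0, 0.1, 0.05, 0.001)

# Shrinkage form of the same posterior mean
shrunk = 0.05 - 0.001 / (0.001 + 0.1) * (0.05 - 0.0)
assert abs(mu_post - shrunk) < 1e-9  # the two parameterizations agree
```

Note how little the weak prior moves the estimate here: the posterior mean is about 0.0495 rather than 0.05.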

Confidence intervals

Now that we have (normal) distributions describing our estimate of the lift, that is, an estimate for the mean and variance of the lift, we can construct confidence intervals. The width of those intervals will depend on the confidence level, $c \in (0, 1)$, which represents the desired likelihood of the interval including the true lift.

Frequentist analysis

For frequentist methods, we want to ensure that our confidence interval contains the true lift $\Delta$ with some minimum probability $c$ (our confidence level).[7] Specifically, after $t$ observations (where $t = n_C + n_T$ and can be thought of as a function of time), we want to set the lower and upper bounds of our confidence interval $(L_t, U_t)$ such that:[8]

$$\begin{equation} P(L_t^c \le \Delta_t \le U_t^c) \ge c \end{equation}$$


Fixed-sample analysis assumes that the results are only looked at once, and therefore only a single interval need be constructed, so we can ignore the $t$ subscripts in the above constraint (eq. 15). Given that we have constructed a normal distribution that describes the lift estimate, we can simply use that distribution's quantile function $F_{\tiny{\Delta}}^{-1}(p) = \hat{\mu}_{\tiny{\Delta}} + \hat{\sigma}_{\tiny{\Delta}} \mathop{\Phi^{-1}}(p)$, where $\mathop{\Phi^{-1}}$ is the quantile function for the standard normal distribution and $p \in (0, 1)$ is the desired quantile. Specifically, this gives us lower and upper bounds for a given confidence level $c$ (e.g., $c = 0.95$ for a 95% confidence interval):

$$\begin{equation} \begin{split} L^c &= \hat{\mu}_{\tiny{\Delta}} - \hat{\sigma}_{\tiny{\Delta}}~z_c \\ U^c &= \hat{\mu}_{\tiny{\Delta}} + \hat{\sigma}_{\tiny{\Delta}}~z_c \\ z_c &= \mathop{\Phi^{-1}}(\tfrac{1+c}{2}) \end{split} \end{equation}$$
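A minimal sketch of the fixed-sample interval (illustrative, using Python's standard library; here $z_c$ is taken as the positive upper-tail quantile $\Phi^{-1}(\tfrac{1+c}{2})$, equal in magnitude to $\Phi^{-1}(\tfrac{1-c}{2})$):

```python
from math import sqrt
from statistics import NormalDist

def fixed_sample_ci(mu_delta, var_delta, c=0.95):
    """Two-sided fixed-sample confidence interval for the lift."""
    z_c = NormalDist().inv_cdf((1 + c) / 2)  # ≈ 1.96 for c = 0.95
    half_width = sqrt(var_delta) * z_c
    return mu_delta - half_width, mu_delta + half_width

# Hypothetical 10% lift estimate with standard error 0.05
lo, hi = fixed_sample_ci(0.10, 0.0025)
# lo ≈ 0.002, hi ≈ 0.198: the interval excludes zero, barely
```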


For the sequential analysis method, we need to ensure that the constraint in equation 15 holds for all $t$ at once. That is:

$$P(L_t^c \le \Delta_t \le U_t^c~,~ \forall t \ge 1) \ge c$$

This means that we do not have a single confidence interval, but rather a confidence sequence: an infinite sequence of confidence intervals such that, not only is each individual interval valid for controlling the rate of false positives, but the aggregation of all intervals is valid as well. In other words, a confidence sequence provides statistical guarantees while allowing you to peek at the results as many times as you want (including after every single observation) and to stop the experiment at any time.

The method we use for constructing the bounds $L_t^c$ and $U_t^c$ comes from Howard et al., “Time-Uniform, Nonparametric, Nonasymptotic Confidence Sequences” and has the following useful properties:

  1. The statistical guarantees are valid even under very broad assumptions about the underlying distribution of $\Delta$.
  2. You do not need to predetermine the sample size.
  3. You can peek at results any number of times, and can decide, based on what you see, to shut down the experiment or keep it running to collect more data.
  4. You can use any stopping rule you like, meaning, for example, you can change the confidence level in the middle of the experiment.
  5. As you collect more data, the width of the intervals will tend to get smaller and smaller, and eventually will approach zero.
  6. Although the confidence intervals are wider than for fixed-sample analysis, the penalty incurred for the additional flexibility and generality is smaller than that from a host of previous methods.

We use a slightly modified version of equation 14 from the reference, with some changes in notation, to construct our bounds.[9] Specifically, we use the estimated lift $\hat{\mu}_{\tiny{\Delta}}$ from equation 3, the estimated standard error of the lift $\hat{\sigma}_{\tiny{\Delta}}$ from equation 4, the confidence level $c \in (0, 1)$, and the total sample size $t = n_C + n_T$:

$$\begin{equation} \begin{split} L_t^c &= \hat{\mu}_{\tiny{\Delta}} - \hat{\sigma}_{\tiny{\Delta}}~B \\ U_t^c &= \hat{\mu}_{\tiny{\Delta}} + \hat{\sigma}_{\tiny{\Delta}}~B \\ B &= \frac{1}{t} \sqrt{ (t + \rho) \log\Bigl(\tfrac{t+\rho}{\rho(1 - c)^2}\Bigr)} \\[1.2em] \rho &= \frac{N_{tune}}{\log\Bigl( \log \bigl( \tfrac{e}{(1-c)^{2}} \bigr) \Bigr)-2\log{(1-c)}} \end{split} \end{equation}$$

where $N_{tune}$ is a tuning parameter that determines where the ratio between the sequential confidence interval width and the fixed-sample confidence interval width is minimized; we set $N_{tune}=10{,}000$ to try to minimize the cost (in terms of additional sample needed) of the sequential method around typical A/B test sample sizes.
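For illustration, the bound can be transcribed directly from the equation above (a sketch of the displayed formula only, not Eppo's production code):

```python
from math import e, log, sqrt

def rho(c=0.95, n_tune=10_000):
    """Tuning constant ρ from the equation above."""
    return n_tune / (log(log(e / (1 - c) ** 2)) - 2 * log(1 - c))

def sequential_width(t, c=0.95, n_tune=10_000):
    """Width multiplier B for the sequential confidence interval at sample size t."""
    r = rho(c, n_tune)
    return sqrt((t + r) * log((t + r) / (r * (1 - c) ** 2))) / t

# The multiplier shrinks as more data arrives:
widths = [sequential_width(t) for t in (1_000, 10_000, 100_000)]
```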

We have run extensive simulations to validate that these confidence intervals satisfy the specified coverage guarantees.


Bayesian analysis

For simplicity, when referring to the lower and upper bounds around a lift estimate, we generally use the phrase "confidence interval" regardless of which analysis method you might be using; in a Bayesian context, however, the term is a misnomer, as Bayesian methods approach the problem of inference differently than frequentist methods do. Bayesians therefore use credible intervals instead.

Although the epistemological underpinnings for constructing a Bayesian credible interval may share little with the process for constructing a fixed-sample confidence interval, the statistical procedure is nearly identical. That is, given our posterior distribution $\hat{\Delta}_{post}$ from equation 9, we simply look at the quantile function for that distribution to set our bounds:

$$\begin{equation} \begin{split} L^c &= \hat{\mu}_{post} - \hat{\sigma}_{post}~z_c \\ U^c &= \hat{\mu}_{post} + \hat{\sigma}_{post}~z_c \\ z_c &= \mathop{\Phi^{-1}}(\tfrac{1+c}{2}) \end{split} \end{equation}$$

It is important to note that the constraint in equation 12 (ensuring that the probability that the true lift falls within the bounds $(L^c, U^c)$ is at least as high as our confidence level) does not hold in the Bayesian case, simply because such a constraint is nonsensical in a Bayesian context. Instead, the above bounds in equation 15 (which constitute a credible interval) describe our beliefs about what the lift might plausibly be given our prior and the observed data. In particular, we can say that we expect $\Delta \in (L^c, U^c)$ with probability $c$, given our prior and the observed data.

  1. Under reasonable assumptions, we can approximate the ratio of two normal distributions as a normal distribution centered on the ratio of the means. In essence, the approximation requires that the denominator be unlikely to be negative. Since all metrics are positive, the requirement boils down to whether the distribution of the denominator is sufficiently narrow (that is, has sufficiently low variance, relative to the mean). There is a short treatment of this approximation here, and a longer treatment in Díaz-Francés and Rubio (2004), "On the Existence of a Normal Approximation to the Distribution of the Ratio of Two Independent Normal Random Variables."
  2. For more on requirements for this approximation, see note above. In this case, the denominator (which must be unlikely to be negative for the approximation to hold) is the distribution of the treatment metric.
  3. We use a normal distribution because it is a convenient conjugate prior, meaning that we can update it with our (normally distributed) lift estimate and produce another normal distribution. In this case, we are assuming that the variance of the lift is known, that is, that $\hat{\sigma}^2_{\tiny{\Delta}}$ is accurate. The choice of a wide prior is, in part, designed to compensate for this assumption. Evaluating and improving upon our choice of prior is an area of ongoing research.
  4. For a derivation, see Gelman et al., “Bayesian Data Analysis Third Edition” (2020), §2.5. An alternative derivation is provided in Murphy, “Conjugate Bayesian Analysis of the Gaussian Distribution” (2007).
  5. This section follows Gelman et al., “Bayesian Data Analysis Third Edition” (2020), p. 40, with some slight tweaks to notation and ordering.
  6. In particular, if the sample variance of the data goes to zero (as would happen if our sample size gets very large), so will this quotient, meaning that the prior will have less and less of an effect on the posterior. Furthermore, it will get to zero faster, as we add samples, if our prior is weaker, that is, if it has a higher variance. On the other hand, if we have a strong prior belief, represented by a low value of $\sigma^2_{prior}$, then moving this term toward zero requires more data (or data that doesn't vary much).
  7. This is equivalent to saying that we want to limit the false positive rate to be no more than $1-c$, which is how this constraint is typically framed in the context of null hypothesis significance testing.
  8. Note that we do not assume, as is typical, that $\Delta$ is constant across all sample sizes $t$. See Howard et al., “Time-Uniform, Nonparametric, Nonasymptotic Confidence Sequences”, p. 19 for a discussion of the implications of assuming that lift is invariant over sample sizes.
  9. In particular, Howard et al. assume unit variance while our lift estimate has variance $\hat{\sigma}^2_{\tiny{\Delta}}$, and we set our bounds around the mean estimated lift, rather than the sum.