Deriving Z-Test Formulas: 1-Sample, 1-Sided


Setup

We will derive the formulas for three situations: Normal, Binomial, and Poisson data. The data are $n$ independent and identically distributed random variables $$Y_1, Y_2, \dots, Y_n,$$ and we wish to compare the hypotheses $$H_0: \mu=\mu_0$$ $$H_1: \mu\gt\mu_0$$ where $\mu$ is the mean of the data's distribution and $\mu_0$ is a specified comparison value. The test statistic will be the sample average $$X=\bar{Y}=\frac{1}{n}\sum_{i=1}^{n}{Y_i},$$ which in each case (Normal, Binomial, Poisson) is exactly or approximately a normal random variable; the mean and variance of $X$ depend on the data's distribution. We reject $H_0$ if $X$ is "too big", and accept $H_0$ otherwise.

Critical Value and Accept/Reject Regions

First, let's determine the critical value. Since $X$ is a normal random variable, it could (theoretically) take any real value. Since we're testing whether the mean is larger than $\mu_0$, we'll reject the null hypothesis if $X$ turns out to be "too big". We'll need a critical value, $cv$, that serves as the cut-off point; that is, if our observed value of $X$ turns out to be larger than $cv$, then we'll reject the null hypothesis. If $cv$ is very large, then it'll be very unlikely to observe a value of $X$ larger than $cv$. On the other hand, if $cv$ is too small, i.e., too close to $\mu_0$, then it'll be much more likely that we observe a value of $X$ larger than $cv$.

So, would we rather have a large or small critical value, and how do we decide? This is based on our desired Type I error rate, $\alpha$, which is often set at the familiar value of 0.05, or 5%. Specifically, if the null hypothesis is true, we want the probability of rejecting the null hypothesis to be $\alpha$. We can write this in symbols as $$\alpha=Pr(X\ge cv|H_0).$$ Writing $\sigma_n$ for the standard deviation of the test statistic $X$ (its exact form depends on the data's distribution and on $n$; see the specific cases below), we can use a little basic algebra and a few useful properties of the normal distribution to find the value of $cv$ as follows: $$ \begin{align} \alpha & = Pr(X\ge cv|H_0) & \small\text{Definition of Type I error}\\ & = 1-Pr(X\le cv|H_0) & \small\text{Probabilities sum to 1}\\ & = 1-Pr\left(\frac{\displaystyle X-\mu_0}{\displaystyle\sigma_n}\le \frac{\displaystyle cv-\mu_0}{\displaystyle\sigma_n} \Big\vert H_0\right) & \small\text{Standardize both sides of the inequality}\\ & = 1-\Phi\left(\frac{\displaystyle cv-\mu_0}{\displaystyle\sigma_n}\right) & \small\text{Definition of standard normal distribution function}\\ 1-\alpha & = \Phi\left(\frac{\displaystyle cv-\mu_0}{\displaystyle\sigma_n}\right) & \small\text{A little algebra}\\ z_{1-\alpha} & = \frac{\displaystyle cv-\mu_0}{\displaystyle\sigma_n} & \small\text{Definition of standard normal quantiles}\\ cv & = \mu_0 + z_{1-\alpha}\,\sigma_n & \small\text{A little more algebra}\\ \end{align} $$

With our critical value we have defined the following acceptance and rejection regions: $$ \begin{align} \text{Acceptance Region:} \quad & \text{Accept } H_0 \text{ if } X \lt \mu_0 + z_{1-\alpha}\,\sigma_n\\ \text{Rejection Region:} \quad & \text{Reject } H_0 \text{ if } X \ge \mu_0 + z_{1-\alpha}\,\sigma_n \end{align} $$
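To make the formula concrete, here is a minimal Python sketch using scipy; the values of $\mu_0$, $\sigma_n$, and $\alpha$ are illustrative assumptions, not taken from anything above:

```python
from scipy.stats import norm

# Illustrative (assumed) values for a one-sided z-test
mu0 = 0.0      # comparison value under H0
sigma_n = 1.0  # standard deviation of the test statistic X
alpha = 0.05   # desired Type I error rate

# cv = mu0 + z_{1-alpha} * sigma_n
cv = mu0 + norm.ppf(1 - alpha) * sigma_n
print(cv)  # ~1.645; reject H0 if X >= cv
```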

Power

We can derive the power formula in much the same way we derived the critical value above. Since power is defined as the probability of rejecting the null hypothesis given that the alternative is true, we may proceed as follows: $$ \begin{align} \text{Power} & = Pr(X\ge \mu_0 + z_{1-\alpha}\,\sigma_n|H_1) & \small\text{Definition of power}\\ & = 1-Pr(X\le \mu_0 + z_{1-\alpha}\,\sigma_n|H_1) & \small\text{Probabilities sum to 1}\\ & = 1-Pr\left(\frac{\displaystyle X-\mu}{\displaystyle\sigma_n}\le \frac{\displaystyle \mu_0 + z_{1-\alpha}\,\sigma_n-\mu}{\displaystyle\sigma_n} \Big\vert H_1\right) & \small\text{Standardize both sides of the inequality}\\ & = 1-Pr\left(\frac{\displaystyle X-\mu}{\displaystyle\sigma_n}\le \frac{\displaystyle \mu_0-\mu}{\displaystyle\sigma_n}+ z_{1-\alpha} \Big\vert H_1\right) & \small\text{A little algebra}\\ & = 1-\Phi\left(\frac{\displaystyle \mu_0-\mu}{\displaystyle\sigma_n}+ z_{1-\alpha}\right) & \small\text{Definition of standard normal distribution function}\\ & = \Phi\left(\frac{\displaystyle \mu-\mu_0}{\displaystyle\sigma_n}- z_{1-\alpha}\right) & \small\text{Symmetry: } 1-\Phi(x)=\Phi(-x)\\ \end{align} $$
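As a sanity check on the final line, here is a short Python sketch of the power formula (the values of $\mu$, $\mu_0$, and $\sigma_n$ are illustrative assumptions):

```python
from scipy.stats import norm

def power(mu, mu0, sigma_n, alpha=0.05):
    """Power = Phi((mu - mu0)/sigma_n - z_{1-alpha})."""
    return norm.cdf((mu - mu0) / sigma_n - norm.ppf(1 - alpha))

# Illustrative values: true mean 0.5 vs mu0 = 0, with sigma_n = 0.25
print(power(mu=0.5, mu0=0.0, sigma_n=0.25))  # ~0.64
```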

Sample Size

A formula for sample size can be obtained by algebraically solving for $n$ in the power formula above. Here we'll solve for $1/\sigma_n$; below, we'll substitute the specific form of $\sigma_n$ for the cases when the data are Normal, Binomial, and Poisson, and then solve for $n$. First, note that the typical notation is Power $= 1-\beta$, where $\beta$ is the Type II error rate. We can proceed as follows: $$ \begin{align} 1-\beta & = \Phi\left(\frac{\displaystyle \mu-\mu_0}{\displaystyle\sigma_n}- z_{1-\alpha}\right) & \small\text{Power formula from above}\\ z_{1-\beta} & = \frac{\displaystyle \mu-\mu_0}{\displaystyle\sigma_n}- z_{1-\alpha} & \small\text{Definition of standard normal quantiles}\\ \frac{\displaystyle 1}{\displaystyle \sigma_n} & = \frac{\displaystyle z_{1-\beta}+z_{1-\alpha}}{\displaystyle \mu-\mu_0} & \small\text{A little algebra}\\ \end{align} $$
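Since $\sigma_n$ shrinks as $n$ grows, this relation tells us how small $\sigma_n$ must be to achieve the desired power. A minimal Python sketch of the generic relation (illustrative values assumed):

```python
from scipy.stats import norm

def required_inv_sigma_n(mu, mu0, alpha=0.05, beta=0.20):
    """Required 1/sigma_n = (z_{1-beta} + z_{1-alpha}) / (mu - mu0)."""
    return (norm.ppf(1 - beta) + norm.ppf(1 - alpha)) / (mu - mu0)

# Illustrative values: 80% power (beta = 0.20) to detect mu = 0.5 vs mu0 = 0
print(required_inv_sigma_n(mu=0.5, mu0=0.0))  # ~4.97, i.e. need sigma_n <= ~0.20
```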

Normal Data -- Testing a Mean

Suppose the data are $Y_1, Y_2, \dots, Y_n \overset{iid}\sim N(\mu,\sigma^2)$, where $\sigma^2$ is assumed known. Then the test statistic is the average, $X=\bar{Y}=\frac{1}{n}\sum_{i=1}^{n}{Y_i},$ and we know that $$\bar{Y}\sim N(\mu,\sigma^2/n).$$ Thus, we replace $\sigma_n$ with $\sigma/\sqrt{n}$ in the above power and sample size formulas to obtain $$ \begin{align} \text{Power} = \Phi\left(\frac{\displaystyle \mu-\mu_0}{\displaystyle\sigma/\sqrt{n}}- z_{1-\alpha}\right) \end{align} $$ and $$ \begin{align} n = \left(\sigma\frac{\displaystyle z_{1-\beta}+z_{1-\alpha}}{\displaystyle \mu-\mu_0}\right)^2 \end{align} $$
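In Python, the Normal-data formulas might look like this; it's a sketch, and the example values of $\mu$, $\mu_0$, and $\sigma$ are assumptions for illustration:

```python
import math
from scipy.stats import norm

def power_normal(mu, mu0, sigma, n, alpha=0.05):
    """Phi((mu - mu0) / (sigma / sqrt(n)) - z_{1-alpha})."""
    return norm.cdf((mu - mu0) / (sigma / math.sqrt(n)) - norm.ppf(1 - alpha))

def n_normal(mu, mu0, sigma, alpha=0.05, beta=0.20):
    """n = (sigma * (z_{1-beta} + z_{1-alpha}) / (mu - mu0))^2, rounded up."""
    n = (sigma * (norm.ppf(1 - beta) + norm.ppf(1 - alpha)) / (mu - mu0)) ** 2
    return math.ceil(n)

# Illustrative values: detect mu = 0.5 vs mu0 = 0 with sigma = 1
print(n_normal(mu=0.5, mu0=0.0, sigma=1.0))            # 25
print(power_normal(mu=0.5, mu0=0.0, sigma=1.0, n=25))  # ~0.80
```

Rounding $n$ up to the next integer guarantees at least the requested power.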

Binomial Data -- Testing a Proportion

Suppose the data $Y_1, Y_2, \dots, Y_n$ represent $n$ independent binary outcomes, each with success probability $p$. Then the test statistic is the sample proportion, $X=\hat{p}=\frac{1}{n}\sum_{i=1}^{n}{Y_i}$, which is also the maximum likelihood estimator of $p$, and for large $n$ has the approximate distribution $$\hat{p}\sim N\left(p,\frac{\displaystyle p(1-p)}{\displaystyle n}\right).$$ Thus, in the above power and sample size formulas we replace $\sigma_n$ with $\sqrt{p(1-p)/n}$, and $\mu$ with $p$, to obtain $$ \begin{align} \text{Power} = \Phi\left(\frac{\displaystyle p-p_0}{\displaystyle\sqrt{p(1-p)/n}}- z_{1-\alpha}\right) \end{align} $$ and $$ \begin{align} n = p(1-p)\left(\frac{\displaystyle z_{1-\beta}+z_{1-\alpha}}{\displaystyle p-p_0}\right)^2 \end{align} $$ (Strictly speaking, the critical value was derived under $H_0$, where the standard deviation is $\sqrt{p_0(1-p_0)/n}$; using the alternative-hypothesis variance throughout, as here, is a common simplification.)
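The Binomial formulas in Python, in the same style as above (a sketch; the example values of $p$ and $p_0$ are assumed for illustration):

```python
import math
from scipy.stats import norm

def power_prop(p, p0, n, alpha=0.05):
    """Phi((p - p0) / sqrt(p(1-p)/n) - z_{1-alpha})."""
    return norm.cdf((p - p0) / math.sqrt(p * (1 - p) / n) - norm.ppf(1 - alpha))

def n_prop(p, p0, alpha=0.05, beta=0.20):
    """n = p(1-p) * ((z_{1-beta} + z_{1-alpha}) / (p - p0))^2, rounded up."""
    n = p * (1 - p) * ((norm.ppf(1 - beta) + norm.ppf(1 - alpha)) / (p - p0)) ** 2
    return math.ceil(n)

# Illustrative values: detect p = 0.6 vs p0 = 0.5
print(n_prop(p=0.6, p0=0.5))             # 149
print(power_prop(p=0.6, p0=0.5, n=149))  # ~0.80
```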

Poisson Data -- Testing a Rate

Suppose the data $Y_1, Y_2, \dots, Y_n$ represent $n$ independent Poisson random variables, each with rate $\lambda$. Then the test statistic is the sample average, $X=\hat{\lambda}=\frac{1}{n}\sum_{i=1}^{n}{Y_i}$, which is also the maximum likelihood estimator of $\lambda$, and for large $n$ has the approximate distribution $$\hat{\lambda}\sim N\left(\lambda,\frac{\displaystyle \lambda}{\displaystyle n}\right).$$ Thus, in the above power and sample size formulas we replace $\sigma_n$ with $\sqrt{\lambda/n}$, and $\mu$ with $\lambda$, to obtain $$ \begin{align} \text{Power} = \Phi\left(\frac{\displaystyle \lambda-\lambda_0}{\displaystyle\sqrt{\lambda/n}}- z_{1-\alpha}\right) \end{align} $$ and $$ \begin{align} n = \lambda\left(\frac{\displaystyle z_{1-\beta}+z_{1-\alpha}}{\displaystyle \lambda-\lambda_0}\right)^2 \end{align} $$ (As in the Binomial case, we use the variance under the alternative throughout.)
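And the Poisson case in Python (a sketch; the example values of $\lambda$ and $\lambda_0$ are assumed for illustration):

```python
import math
from scipy.stats import norm

def power_rate(lam, lam0, n, alpha=0.05):
    """Phi((lam - lam0) / sqrt(lam/n) - z_{1-alpha})."""
    return norm.cdf((lam - lam0) / math.sqrt(lam / n) - norm.ppf(1 - alpha))

def n_rate(lam, lam0, alpha=0.05, beta=0.20):
    """n = lam * ((z_{1-beta} + z_{1-alpha}) / (lam - lam0))^2, rounded up."""
    n = lam * ((norm.ppf(1 - beta) + norm.ppf(1 - alpha)) / (lam - lam0)) ** 2
    return math.ceil(n)

# Illustrative values: detect lam = 1.2 vs lam0 = 1.0
print(n_rate(lam=1.2, lam0=1.0))             # 186
print(power_rate(lam=1.2, lam0=1.0, n=186))  # ~0.80
```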