In compromise power analyses, \(\alpha\) and \(1-\beta\) are computed as functions of the effect size, sample size, model, and, most importantly, the error probability ratio \(q=\frac{\beta}{\alpha}\), which expresses the trade-off between Type I and Type II errors. Setting \(\alpha=\beta\) results in \(q=1\), indicating an equal trade-off between Type I and Type II errors, while \(q=4\) indicates that Type I errors are four times as costly to make as Type II errors (hence, rejecting the null hypothesis when it is in fact true is worse than retaining the null hypothesis when it is in fact false). This is particularly useful when a prior power analysis suggests a required sample size much larger than the researcher can afford.
For instance, suppose that only \(N=100\) participants per group are available for an independent samples \(t\)-test analysis with a Welch correction, and the true effect size is thought to be \(d=.3\). In such a simulation, and assuming for the moment that \(\alpha=.05\) (this will be changed momentarily, which is possible because the store_results = TRUE flag retains the analysis results), the following indicates the \(q=\frac{\beta}{\alpha}\) error ratio estimate for this fixed \(\alpha\) level.
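Before turning to the simulation itself, a rough analytic approximation of this ratio can be obtained with base R's power.t.test(). Note that this assumes an equal-variance \(t\)-test, whereas the simulation uses a Welch correction, so the simulated values will differ slightly.

```r
# Rough analytic cross-check of the beta/alpha ratio for this design,
# assuming an equal-variance t-test (the simulation uses a Welch correction)
pow <- power.t.test(n = 100, delta = 0.3, sd = 1,
                    sig.level = 0.05, type = "two.sample")$power
beta <- 1 - pow
q <- beta / 0.05
round(c(power = pow, q = q), 2)
```

With \(\alpha = .05\) fixed, the Type II error rate is far larger than the Type I error rate, which is what motivates adjusting \(\alpha\) below.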
However, because store_results = TRUE was used (the default), the results can be reSummarise()-ed using a different \(\alpha\) cut-off, making it possible to obtain some target \(q\) from the stored simulation results. For example,
# compute beta/alpha ratio given different alpha
compromise <- function(alpha, sim, Design){
    Design$alpha <- alpha
    out <- reSummarise(Summarise, results=sim, Design=Design)
    out$q
}

# more liberal Type I error (but lower Type II error)
compromise(.3, sim=sim, Design=Design)
[1] 0.463
# more conservative Type I error (but higher Type II error)
compromise(.01, sim=sim, Design=Design)
[1] 68.2
which indicates different \(q\) ratios. If a specific ratio is desired, then root-solving methods can be used to solve \(f(\alpha) = q\) for the target \(q\).
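For instance, uniroot() could be applied to the compromise() function defined above. The sketch below substitutes an analytic \(q(\alpha)\) based on power.t.test() (equal-variance assumption) in place of the stored simulation so that it is self-contained; with the simulation results available, compromise() would be used instead.

```r
# Hedged sketch: root-solve f(alpha) = q with uniroot(). An analytic
# q(alpha) built from power.t.test() stands in for the simulation-based
# compromise() function so the example runs on its own.
q_analytic <- function(alpha, n = 100, d = .3){
    pow <- power.t.test(n = n, delta = d, sig.level = alpha)$power
    (1 - pow) / alpha    # q = beta / alpha
}

# alpha yielding q = 1 (equal Type I and Type II error rates)
alpha_q1 <- uniroot(function(a) q_analytic(a) - 1, c(.001, .5))$root
# alpha yielding q = 4 (Type I errors four times as costly)
alpha_q4 <- uniroot(function(a) q_analytic(a) - 4, c(.001, .5))$root
round(c(q1 = alpha_q1, q4 = alpha_q4), 3)
```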
Hence, when equal \(\beta\) and \(\alpha\) errors are desired the \(\alpha\) to utilize is 0.2, while if Type I errors are considered four times as costly as Type II errors then \(\alpha\) should be approximately 0.086.
Compromise analysis with empirical \(\alpha\) estimate
In situations where the Type I error rate is controlled by a selected \(\alpha\), but the true/empirical Type I error rate associated with this \(\alpha\) is in fact sub-optimal (e.g., small samples that utilize maximum-likelihood estimators), it is possible to define the compromise ratio \(q = \frac{\beta}{\alpha}\) in terms of the empirical Type I error rate rather than the assumed nominal \(\alpha\). Doing so requires more computation, as the null model must also be generated and analysed; however, the resulting compromise ratio and root-solved cut-offs should perform more honestly in practice. Below is an example which utilizes the empirical Type I error estimate rather than assuming the Type I error rate equals \(\alpha\).
######################
# Same as above, however if Type I error not nominal then may wish to use
# empirical Type I error estimate instead
library(SimDesign)

Design <- createDesign(N = 100, d = .3, alpha = .05)
Design
# A tibble: 1 × 3
      N     d alpha
  <dbl> <dbl> <dbl>
1   100   0.3  0.05
# compute beta/alpha ratio given different alpha
compromise <- function(alpha, sim, Design){
    Design$alpha <- alpha
    out <- reSummarise(Summarise, results=sim, Design=Design)
    out$q
}

compromise(.3, sim=sim, Design=Design)
Based on the empirical \(\hat{\alpha}\) and \(\hat{\beta}\) estimates, when equal \(\beta\) and \(\alpha\) errors are desired the \(\alpha\) to utilize is 0.2, while if Type I errors are considered four times as costly as Type II errors then \(\alpha\) should be approximately 0.087. These agree with the results above for this particular analysis because the empirical \(\alpha\) closely matches the theoretical \(\alpha\); in situations where these quantities do not match, however, the solutions can and will differ.
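As a supplementary illustration outside the SimDesign workflow, the empirical error rates for this design can also be approximated with a small base-R Monte Carlo sketch (hypothetical code, not the stored simulation above). Both the null (\(d=0\)) and alternative (\(d=.3\)) models are generated, mirroring the extra computation described earlier.

```r
# Hedged base-R sketch: estimate empirical Type I and Type II error rates
# for the Welch t-test (t.test() default) by Monte Carlo, then form
# q = beta_hat / alpha_hat from the empirical rates
set.seed(42)
R <- 5000; N <- 100; d <- 0.3; alpha <- 0.05
p_null <- replicate(R, t.test(rnorm(N), rnorm(N))$p.value)           # d = 0
p_alt  <- replicate(R, t.test(rnorm(N, mean = d), rnorm(N))$p.value) # d = .3
alpha_hat <- mean(p_null < alpha)    # empirical Type I error rate
beta_hat  <- mean(p_alt >= alpha)    # empirical Type II error rate
c(alpha_hat = alpha_hat, beta_hat = beta_hat, q = beta_hat / alpha_hat)
```

Because the Welch \(t\)-test is well calibrated at this sample size, \(\hat{\alpha}\) lands close to the nominal .05, which is why the empirical and nominal solutions agree here.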