Tukey on Decisions and Conclusions

In 1955 Tukey gave a dinner talk about the difference between decisions and conclusions at a meeting of the Section of Physical and Engineering Science of the American Statistical Association. The talk was published in 1960. The distinction relates directly to different goals researchers might have when they collect data. This blog post is largely a summary of his paper.

 

Tukey was concerned about the ‘tendency of decision theory to attempt to conquest all of statistics’. In hindsight, he needn’t have worried. In the social sciences, most statistics textbooks do not even discuss decision theory. His goal was to distinguish decisions from conclusions, and to carve out a space for ‘conclusion theory’ to complement decision theory.

 

In practice, making a decision means to ‘decide to act for the present as if’. Possible actions are defined, possible states of nature identified, and we make an inference about each state of nature. Decisions can be made even when we remain extremely uncertain about any ‘truth’. Indeed, in extreme cases we can even make decisions without access to any data. We might even decide to act as if two mutually exclusive states of nature are true! For example, we might buy a train ticket for a holiday three months from now, but also take out life insurance in case we die tomorrow.   

 

Conclusions differ from decisions. First, conclusions are established without taking consequences into consideration. Second, conclusions are used to build up a ‘fairly well-established body of knowledge’. As Tukey writes: “A conclusion is a statement which is to be accepted as applicable to the conditions of an experiment or observation unless and until unusually strong evidence to the contrary arises.” A conclusion is not a decision on how to act in the present. Conclusions are to be accepted, and thereby incorporated into what Frick (1996) calls a ‘corpus of findings’. According to Tukey, conclusions are used to narrow down the number of working hypotheses still considered consistent with observations. Conclusions should be reached, not based on their consequences, but because of their lasting (but not everlasting, as conclusions can now and then be overturned by new evidence) contribution to scientific knowledge.

 

Tests of hypotheses

 

According to Tukey, a test of hypotheses can have two functions. The first function is as a decision procedure, and the second function is to reach a conclusion. In a decision procedure the goal is to choose a course of action given an acceptable risk. This risk can be high. For example, a researcher might decide not to pursue a research idea after a first study, designed to have 80% power for a smallest effect size of interest, yields a non-significant result. The error rate is at most 20%, but the researcher might have enough good research ideas to not care.

 

The second function is to reach a conclusion. This is done, according to Tukey, by controlling the Type 1 and Type 2 error rate at ‘suitably low levels’ (note: Tukey’s discussion of concluding that an effect is absent is hindered somewhat by the fact that equivalence tests were not yet widely established in 1955 – Hodges & Lehmann’s paper appeared in 1954). Low error rates, such as the conventional 5% or 1% alpha levels, are needed to draw conclusions that can enter the corpus of findings (even though some of these conclusions will turn out to be wrong, in the long run).

 

Why would we need conclusions?

 

One might reasonably wonder if we need conclusions in science. Tukey also ponders this question in Appendix 2. He writes “Science, in the broadest sense, is both one of the most successful of human affairs, and one of the most decentralized. In principle, each of us puts his evidence (his observations, experimental or not, and their discussion) before all the others, and in due course an adequate consensus of opinion develops.” He argues for conclusions not for an epistemological reason, nor for a statistical reason, but for a sociological one. Tukey writes: “There are four types of difficulty, then, ranging from communication through assessment to mathematical treatment, each of which by itself will be sufficient, for a long time, to prevent the replacement, in science, of the system of conclusions by a system based more closely on today’s decision theory.” He notes how scientists can no longer get together in a single room (as was somewhat possible in the early decades of the Royal Society of London) to reach consensus about decisions. Therefore, they need to communicate conclusions, as “In order to replace conclusions as the basic means of communication, it would be necessary to rearrange and replan the entire fabric of science.”

 

I hadn’t read Tukey’s paper when we wrote our preprint “The Epistemic and Pragmatic Function of Dichotomous Claims Based on Statistical Hypothesis Tests”. In this preprint, we also discuss a sociological reason for the presence of dichotomous claims in science. We also ask: “Would it be possible to organize science in a way that relies less on tests of competing theories to arrive at intersubjectively established facts about phenomena?” and similarly conclude: “Such alternative approaches seem feasible if stakeholders agree on the research questions that need to be investigated, and methods to be utilized, and coordinate their research efforts”.  We should add a citation to Tukey’s 1960 paper.

 

Is the goal of a study a conclusion, a decision, or both?

 

Tukey writes he “looks forward to the day when the history and status of tests of hypotheses will have been disentangled.” I think that in 2022 that day has not yet come. At the same time, Tukey admits in Appendix 1 that the two are sometimes intertwined.

 

A situation Tukey does not discuss, but that I think is especially difficult to disentangle, is a cumulative line of research. Although I would prefer to only build on an established corpus of findings, this is simply not possible. Not all conclusions in the current literature are reached with low error rates. This is true both for claims about the absence of an effect (which are rarely based on an equivalence test against a smallest effect size of interest with a low error rate), and for claims about the presence of an effect, not just because of p-hacking, but also because I might want to build on an exploratory finding from a previous study. In such cases, I would like to be able to conclude the effects I build on are established findings, but more often than not, I have to decide these effects are worth building on. The same holds for choices about the design of a set of studies in a research line. I might decide to include a factor in a subsequent study, or drop it. These decisions would be based on conclusions with low error rates if I had the resources to collect large samples and perform replication studies, but at other times they involve decisions about how to act in my next study with quite considerable risk.

 

We allow researchers to publish feasibility studies, pilot studies, and exploratory studies. We don’t require every study to be a Registered Report or a Phase 3 trial. Not all information in the literature that we build on has been established with the rigor Tukey associates with conclusions. And the replication crisis has taught us that more conclusions from the past are later rejected than we might have thought based on the alpha levels reported in the original articles. And in some research areas, where data is scarce, we might need to accept that, if we want to learn anything, the conclusions will always be more tentative (and the error rates accepted in individual studies will be higher) than in research areas where data is abundant.

 

Even if decisions and conclusions can not be completely disentangled, reflecting on their relative differences is very useful, as I think it can help us to clarify the goal we have when we collect data. 

 

For a 2013 blog post by Justin Esarey, who found the distinction a bit less useful than I found it, see https://polmeth.org/blog/scientific-conclusions-versus-scientific-decisions-or-we%E2%80%99re-having-tukey-thanksgiving

 

References

Frick, R. W. (1996). The appropriate use of null hypothesis testing. Psychological Methods, 1(4), 379–390. https://doi.org/10.1037/1082-989X.1.4.379

Tukey, J. W. (1960). Conclusions vs decisions. Technometrics, 2(4), 423–433.

Uygun Tunç, D., Tunç, M. N., & Lakens, D. (2021). The Epistemic and Pragmatic Function of Dichotomous Claims Based on Statistical Hypothesis Tests. PsyArXiv. https://doi.org/10.31234/osf.io/af9by

Not All Flexibility P-Hacking Is, Young Padawan

During a recent workshop on Sample Size Justification an early career researcher asked me: “You recommend sequential analysis in your paper for when effect sizes are uncertain, where researchers collect data, analyze the data, stop when a test is significant, or continue data collection when a test is not significant, and, I don’t want to be rude, but isn’t this p-hacking?”

In linguistics there is a term for when children apply a rule they have learned to instances where it does not apply: overregularization. They learn ‘one cow, two cows’, and use the +s rule for plurals where it is not appropriate, such as ‘one mouse, two mouses’ (instead of ‘two mice’). The early career researcher who asked me if sequential analysis was a form of p-hacking was also overregularizing. We teach young researchers that flexibly analyzing data inflates error rates, is called p-hacking, and is a very bad thing that was one of the causes of the replication crisis. So, they apply the rule ‘flexibility in the data analysis is a bad thing’ to cases where it does not apply, such as in the case of sequential analyses. Yes, sequential analyses give researchers a lot of flexibility to stop data collection, but they do so while carefully controlling error rates, with the added bonus that they can increase the efficiency of data collection. This makes them a good thing, not p-hacking.

 

Children increasingly use correct language the longer they are immersed in it. Many researchers are not yet immersed in an academic environment where they see flexibility in the data analysis applied correctly. Many are scared of doing things wrong, which risks becoming overly conservative, as the pendulum swings back too far from ‘we are all p-hacking without realizing the consequences’ to ‘all flexibility is p-hacking’. Therefore, I patiently explain during workshops that flexibility is not bad per se, but that making claims without controlling your error rate is problematic.

In a recent episode of the podcast ‘Quantitude’, one of the hosts shared a similar experience about 5 minutes in. A young student remarked that flexibility during the data analysis was ‘unethical’. The remainder of the episode, on ‘researcher degrees of freedom’, discussed how flexibility is part of data analysis. They clearly state that p-hacking is problematic, and that opportunistic motivations to perform analyses that give you what you want to find should be constrained. But they then criticized preregistration in ways many people on Twitter disagreed with. They talk about ‘high priests’ who want to ‘stop bad people from doing bad things’, which they find uncomfortable, and say ‘you can not preregister every contingency’. They remark they would be surprised if data could be analyzed without requiring any on-the-fly judgment.

Although the examples they gave were not very good1, it is of course true that researchers sometimes need to deviate from an analysis plan. Deviating from an analysis plan is not p-hacking. But when people talk about preregistration, we often see overregularization: “Preregistration requires specifying your analysis plan to prevent inflation of the Type 1 error rate, so deviating from a preregistration is not allowed.” The whole point of preregistration is to transparently allow other researchers to evaluate the severity of a test, both when you stick to the preregistered statistical analysis plan and when you deviate from it. Some researchers have sufficient experience with the research they do that they can preregister an analysis that does not require any deviations2, and then readers can see that the Type 1 error rate for the study is at the level specified before data collection. Other researchers will need to deviate from their analysis plan because they encounter unexpected data. Some deviations reduce the severity of the test by inflating the Type 1 error rate. But other deviations actually get you closer to the truth. We can not know which is which. A reader needs to form their own judgment about this.

A final example of overregularization comes from a person who discussed a new study that they were preregistering with a junior colleague. They mentioned the possibility of including a covariate in an analysis but thought that was too exploratory to be included in the preregistration. The junior colleague remarked: “But now that we have thought about the analysis, we need to preregister it”. Again, we see an example of overregularization. If you want to control the Type 1 error rate in a test, preregister it, and follow the preregistered statistical analysis plan. But researchers can, and should, explore data to generate hypotheses about things that are going on in their data. You can preregister these, but you do not have to. Not exploring data could even be seen as research waste, as you are missing out on the opportunity to generate hypotheses that are informed by data. A case can be made that researchers should regularly include variables to explore (e.g., measures that are of general interest to peers in their field), as long as these do not interfere with the primary hypothesis test (and as long as these explorations are presented as such).

In the book “Reporting quantitative research in psychology: How to meet APA Style Journal Article Reporting Standards” by Cooper and colleagues from 2020 a very useful distinction is made between primary hypotheses, secondary hypotheses, and exploratory hypotheses. The first consists of the main tests you are designing the study for. The secondary hypotheses are also of interest when you design the study – but you might not have sufficient power to detect them. You did not design the study to test these hypotheses, and because the power for these tests might be low, you did not control the Type 2 error rate for secondary hypotheses. You can preregister secondary hypotheses to control the Type 1 error rate, as you know you will perform them, and if there are multiple secondary hypotheses, as Cooper et al. (2020) remark, readers will expect “adjusted levels of statistical significance, or conservative post hoc means tests, when you conducted your secondary analysis”.

If you think of the possibility to analyze a covariate, but decide this is an exploratory analysis, you can decide to neither control the Type 1 error rate nor the Type 2 error rate. These are analyses, but not tests of a hypothesis, as any findings from these analyses have an unknown Type 1 error rate. Of course, that does not mean these analyses can not be correct in what they reveal – we just have no way to know the long run probability that exploratory conclusions are wrong. Future tests of the hypotheses generated in exploratory analyses are needed. But as long as you follow the Journal Article Reporting Standards and distinguish exploratory analyses, readers know what they are getting. Exploring is not p-hacking.

People in psychology are re-learning the basic rules of hypothesis testing in the wake of the replication crisis. But because they are not yet immersed in good research practices, the lack of experience means they are overregularizing simplistic rules to situations where they do not apply. Not all flexibility is p-hacking, preregistered studies do not prevent you from deviating from your analysis plan, and you do not need to preregister every possible test that you think of. A good cure for overregularization is reasoning from basic principles. Do not follow simple rules (or what you see in published articles) but make decisions based on an understanding of how to achieve your inferential goal. If the goal is to make claims with controlled error rates, prevent Type 1 error inflation, for example by correcting the alpha level where needed. If your goal is to explore data, feel free to do so, but know these explorations should be reported as such. When you design a study, follow the Journal Article Reporting Standards and distinguish tests with different inferential goals.

 

1 E.g., they discuss having to choose between Student’s t-test and Welch’s t-test, depending on whether Levene’s test indicates the assumption of homogeneity is violated, which is not best practice – just follow R, and use Welch’s t-test by default.

2 But this is rare – only 2 out of 27 preregistered studies in Psychological Science made no deviations (https://royalsocietypublishing.org/doi/full/10.1098/rsos.211037). We can probably do a bit better if we only preregistered predictions at a time when we really understand our manipulations and measures.

The Red Team Challenge (Part 3): Is it Feasible in Practice?

By Daniel Lakens & Leo Tiokhin

Also read Part 1 and Part 2 in this series on our Red Team Challenge.


Six weeks ago, we launched the Red Team Challenge: a feasibility study to see whether it could be worthwhile to pay people to find errors in scientific research. In our project, we wanted to see to what extent a “Red Team” – people hired to criticize a scientific study with the goal to improve it – would improve the quality of the resulting scientific work.

Currently, the way that error detection works in science is a bit peculiar. Papers go through the peer-review process and get the peer-reviewed “stamp of approval”. Then, upon publication, some of these same papers receive immediate and widespread criticism. Sometimes this even leads to formal corrections or retractions. And this happens even at some of the most prestigious scientific journals.

So, it seems that our current mechanisms of scientific quality control leave something to be desired. Nicholas Coles, Ruben Arslan, and the authors of this post (Leo Tiokhin and Daniël Lakens) were interested in whether Red Teams might be one way to improve quality control in science.

Ideally, a Red Team joins a research project from the start and criticizes each step of the process. However, doing this would have taken the duration of an entire study. At the time, it also seemed a bit premature — we didn’t know whether anyone would be interested in a Red Team approach, how it would work in practice, and so on. So, instead, Nicholas Coles, Brooke Frohlich, Jeff Larsen, and Lowell Gaertner volunteered one of their manuscripts (a completed study that they were ready to submit for publication). We put out a call on Twitter, Facebook, and the 20% Statistician blog, and 22 people expressed interest. On May 15th, we randomly selected five volunteers based on five areas of expertise: Åse Innes-Ker (affective science), Nicholas James (design/methods), Ingrid Aulike (statistics), Melissa Kline (computational reproducibility), and Tiago Lubiana (wildcard category). The Red Team was then given three weeks to report errors.

Our Red Team project was somewhat similar to traditional peer review, except that we 1) compensated Red Team members’ time with a $200 stipend, 2) explicitly asked the Red Teamers to identify errors in any part of the project (i.e., not just writing), 3) gave the Red Team full access to the materials, data, and code, and 4) provided financial incentives for identifying critical errors (a donation to the GiveWell charity non-profit for each unique “critical error” discovered).

The Red Team submitted 107 error reports. Ruben Arslan, who helped inspire this project with his Bug Bounty Program, served as the neutral arbiter. Ruben examined the reports, evaluated the authors’ responses, and ultimately decided whether an issue was “critical” (see this post for Ruben’s reflection on the Red Team Challenge). Of the 107 reports, Ruben concluded that there were 18 unique critical issues (for details, see this project page). Ruben decided that any major issues that potentially invalidated inferences were worth $100, minor issues related to computational reproducibility were worth $20, and minor issues that could be resolved without much work were worth $10. After three weeks, the total final donation was $660. The Red Team detected 5 major errors. These included two previously unknown limitations of a key manipulation, inadequacies in the design and description of the power analysis, an incorrectly reported statistical test in the supplemental materials, and a lack of information about the sample in the manuscript. Minor issues concerned reproducibility of code and clarifications about the procedure.



After receiving this feedback, Nicholas Coles and his co-authors decided to hold off submitting their manuscript (see this post for Nicholas’ personal reflection). They are currently conducting a new study to address some of the issues raised by the Red Team.

We consider this to be a feasibility study of whether a Red Team approach is practical and worthwhile. So, based on this study, we shouldn’t draw any conclusions about a Red Team approach in science except one: it can be done.

That said, our study does provide some food for thought. Many people were eager to join the Red Team. The study’s corresponding author, Nicholas Coles, was graciously willing to acknowledge issues when they were pointed out. And it was obvious that, had these issues been pointed out earlier, the study would have been substantially improved before being carried out. These findings make us optimistic that Red Teams can be useful and feasible to implement.

In an earlier column, the issue was raised that rewarding Red Team members with co-authorship on the subsequent paper would create a conflict of interest — too severe criticism on the paper might make the paper unpublishable. So, instead, we paid each Red Teamer $200 for their service. We wanted to reward people for their time. We did not want to reward them only for finding issues because, before we knew that 19 unique issues would be found, we were naively worried that the Red Team might find few things wrong with the paper. In interviews with Red Team members, it became clear that the charitable donations for each issue were not a strong motivator. Instead, people were just happy to detect issues for decent pay. They didn’t think that they deserved authorship for their work, and several Red Team members didn’t consider authorship on an academic paper to be valuable, given their career goals.

After talking with the Red Team members, we started to think that certain people might enjoy Red Teaming as a job – it is challenging, requires skills, and improves science. This opens up the possibility of a freelance services marketplace (such as Fiverr) for error detection, where Red Team members are hired at an hourly rate and potentially rewarded for finding errors. It should be feasible to hire people to check for errors at each phase of a project, depending on their expertise and reputation as good error-detectors. If researchers do not have money for such a service, they might be able to set up a volunteer network where people “Red Team” each other’s projects. It could also be possible for universities to create Red Teams (e.g., Cornell University has a computational reproducibility service that researchers can hire).

As scientists, we should ask ourselves when, and for which type of studies, we want to invest time and/or money to make sure that published work is as free from errors as possible. As we continue to consider ways to increase the reliability of science, a Red Team approach might be something to further explore.

What’s a family in family-wise error control?

When you perform multiple comparisons in a study, you need to control your alpha level for multiple comparisons. It is generally recommended to control for the family-wise error rate, but there is some confusion about what a ‘family’ is. As Bretz, Hothorn, & Westfall (2011) write in their excellent book “Multiple Comparisons Using R” on page 15: “The appropriate choice of null hypotheses being of primary interest is a controversial question. That is, it is not always clear which set of hypotheses should constitute the family H1,…,Hm. This topic has often been in dispute and there is no general consensus.” In one of the best papers on controlling for multiple comparisons out there, Bender & Lange (2001) write: “Unfortunately, there is no simple and unique answer to when it is appropriate to control which error rate. Different persons may have different but nevertheless reasonable opinions. In addition to the problem of deciding which error rate should be under control, it has to be defined first which tests of a study belong to one experiment.” The Wikipedia page on family-wise error rate is a mess.

I will be honest: I have never understood this confusion about what a family of tests is when controlling the family-wise error rate. At least not in a Neyman-Pearson approach to hypothesis testing, where the goal is to use data to make decisions about how to act. Neyman (Neyman, 1957) calls his approach inductive behavior. The outcome of an experiment leads one to take different possible actions, which can be either practical (e.g., implement a new procedure, abandon a research line) or scientific (e.g., claim there is or is no effect). From an error-statistical approach (Mayo, 2018) inflated Type 1 error rates mean that it has become very likely that you will be able to claim support for your hypothesis, even when the hypothesis is wrong. This reduces the severity of the test. To prevent this, we need to control our error rate at the level of our claim.
One reason the issue of family-wise error rates might remain vague is that researchers are often vague about their claims. We do not specify our hypotheses unambiguously, and therefore this issue remains unclear. To be honest, I suspect another reason there is a continuing debate about whether and how to lower the alpha level to control for multiple comparisons in some disciplines is that 1) there are a surprisingly large number of papers written on this topic that argue you do not need to control for multiple comparisons, which are 2) cited a huge number of times, giving rise to the feeling that surely they must have a point. Regrettably, the main reason these papers are written is because there are people who don’t think a Neyman-Pearson approach to hypothesis testing is a good idea, and the main reason these papers are cited is because doing so is convenient for researchers who want to publish statistically significant results, as they can justify why they are not lowering their alpha level, making that p = 0.02 in one of three tests really ‘significant’. All papers that argue against the need to control for multiple comparisons when testing hypotheses are wrong. Yes, their existence and massive citation counts frustrate me. It is fine not to test a hypothesis, but when you do, and you make a claim based on a test, you need to control your error rates.

But let’s get back to our first problem, which we can solve by making the claims people need to control Type 1 error rates for less vague. Lisa DeBruine and I recently proposed machine readable hypothesis tests to remove any ambiguity in the tests we will perform to examine statistical predictions, and when we will consider a claim corroborated or falsified. In this post, I am going to use our R package ‘scienceverse’ to clarify what constitutes a family of tests when controlling the family-wise error rate.

An example of formalizing family-wise error control

Let’s assume we collect data from 100 participants in a control and a treatment condition. We collect 3 dependent variables (dv1, dv2, and dv3). In the population there is no difference between groups on any of these three variables (the true effect size is 0). We will analyze the three dv’s in independent t-tests. This requires specifying our alpha level, and thus deciding whether we need to correct for multiple comparisons. How we control error rates depends on the claim we want to make.
We might want to act as if (or claim that) our treatment works if there is a difference between the treatment and control conditions on any of the three variables. In scienceverse terms, this means we consider the prediction corroborated when the p-value of the first t-test is smaller than the alpha level, the p-value of the second t-test is smaller than the alpha level, or the p-value of the third t-test is smaller than the alpha level. In the scienceverse code, we specify a criterion for each test (a p-value smaller than the alpha level, p.value < alpha_level) and conclude the hypothesis is corroborated if any of these criteria is met (“p_t_1 | p_t_2 | p_t_3”).
We might also want to make three different predictions. Instead of one hypothesis (“something will happen”) we have three different hypotheses, and predict there will be an effect on dv1, on dv2, and on dv3. The criterion for each t-test is the same, but we now have three hypotheses to evaluate (H1, H2, and H3). Each of these claims can be corroborated, or not.
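In plain R terms, and using the three p-values from the simulated dataset reported below (the names p_t_1 to p_t_3 simply mirror the scienceverse criteria), the two claim structures amount to:

alpha_level <- 0.05
p <- c(p_t_1 = 0.452, p_t_2 = 0.21, p_t_3 = 0.02) # p-values for the three t-tests on the simulated null data

any(p < alpha_level) # scenario 1: one claim, corroborated if any test is significant (TRUE, due to dv3)
p < alpha_level      # scenario 2: three separate claims, each evaluated on its own test (FALSE, FALSE, TRUE)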
Scienceverse allows you to specify your hypothesis tests unambiguously (for the code used in this blog post, see the bottom of the post). It also allows you to simulate a dataset, which we can use to examine Type 1 errors by simulating data where no true effects exist. Finally, scienceverse allows you to run the pre-specified analyses on the (simulated) data, and will automatically create a report that summarizes which hypotheses were corroborated (which is useful when checking if the conclusions in a manuscript indeed follow from the preregistered analyses, or not). The output for a single simulated dataset, for the scenario where we interpret any effect on the three dv’s as support for the hypothesis, looks like this:

Evaluation of Statistical Hypotheses

12 March, 2020

Simulating Null Effects Postregistration

Results

Hypothesis 1: H1

Something will happen

  • p_t_1 is confirmed if analysis ttest_1 yields p.value<0.05

    The result was p.value = 0.452 (FALSE)

  • p_t_2 is confirmed if analysis ttest_2 yields p.value<0.05

    The result was p.value = 0.21 (FALSE)

  • p_t_3 is confirmed if analysis ttest_3 yields p.value<0.05

    The result was p.value = 0.02 (TRUE)

Corroboration ( TRUE )

The hypothesis is corroborated if anything is significant.

 p_t_1 | p_t_2 | p_t_3 

Falsification ( FALSE )

The hypothesis is falsified if nothing is significant.

 !p_t_1 & !p_t_2 & !p_t_3 

All criteria were met for corroboration.

We see the hypothesis that ‘something will happen’ is corroborated, because there was a significant difference on dv3 – even though this was a Type 1 error, since we simulated data with a true effect size of 0 – and any difference was taken as support for the prediction. With a 5% alpha level, we will observe 1-(1-0.05)^3 = 14.26% Type 1 errors in the long run. This Type 1 error inflation can be prevented by lowering the alpha level, for example by a Bonferroni correction (0.05/3), after which the expected Type 1 error rate is 4.92% (see Bretz et al., 2011, for more advanced techniques to control error rates). When we examine the report for the second scenario, where each dv tests a unique hypothesis, we get the following output from scienceverse:

Evaluation of Statistical Hypotheses

12 March, 2020

Simulating Null Effects Postregistration

Results

Hypothesis 1: H1

dv1 will show an effect

  • p_t_1 is confirmed if analysis ttest_1 yields p.value<0.05

    The result was p.value = 0.452 (FALSE)

Corroboration ( FALSE )

The hypothesis is corroborated if dv1 is significant.

 p_t_1 

Falsification ( TRUE )

The hypothesis is falsified if dv1 is not significant.

 !p_t_1 

All criteria were met for falsification.

Hypothesis 2: H2

dv2 will show an effect

  • p_t_2 is confirmed if analysis ttest_2 yields p.value<0.05

    The result was p.value = 0.21 (FALSE)

Corroboration ( FALSE )

The hypothesis is corroborated if dv2 is significant.

 p_t_2 

Falsification ( TRUE )

The hypothesis is falsified if dv2 is not significant.

 !p_t_2 

All criteria were met for falsification.

Hypothesis 3: H3

dv3 will show an effect

  • p_t_3 is confirmed if analysis ttest_3 yields p.value<0.05

    The result was p.value = 0.02 (TRUE)

Corroboration ( TRUE )

The hypothesis is corroborated if dv3 is significant.

 p_t_3 

Falsification ( FALSE )

The hypothesis is falsified if dv3 is not significant.

 !p_t_3 

All criteria were met for corroboration.

We now see that two hypotheses were falsified (yes, yes, I know you should not use p > 0.05 to falsify a prediction in real life; this part of the example is formally wrong so that I don’t also have to explain equivalence testing to readers not familiar with it – if that is you, read this, and know that scienceverse will allow you to specify an equivalence test as the criterion to falsify a prediction, see the example here). The third hypothesis is corroborated, even though, as above, this is a Type 1 error.

It might seem that the second approach, specifying each dv as its own hypothesis, is the way to go if you do not want to lower the alpha level to control for multiple comparisons. But take a look at the report of the study you have performed. You have made 3 predictions, of which 1 was corroborated. That is not an impressive success rate. Sure, mixed results happen, and you should interpret results not just based on the p-value (but on the strength of the experimental design, assumptions about power, your prior, the strength of the theory, etc.), but if these predictions were derived from the same theory, this set of results is not particularly impressive. Since researchers can never selectively report only those results that ‘work’, because this would be a violation of the code of research integrity, we should always be able to see the meager track record of predictions. If you don’t feel ready to make specific predictions (and run the risk of sullying your track record), either do unplanned exploratory tests, and do not make claims based on their results, or preregister all possible tests you can think of, and massively lower your alpha level to control error rates (for example, genome-wide association studies sometimes use an alpha level of 5 × 10^-8 to control the Type 1 error rate).

Hopefully, specifying our hypotheses (and what would corroborate them) transparently by using scienceverse makes it clear what happens in the long run in both scenarios. In the long run, both the first scenario, if we use an alpha level of 0.05/3 instead of 0.05, and the second scenario, with an alpha level of 0.05 for each individual hypothesis, will lead to the same end result: not more than 5% of our claims will be wrong, if the null hypothesis is true. In the first scenario, we are making one claim in an experiment, and in the second we make three. In the second scenario we will end up with more false claims in an absolute sense, but the relative number of false claims is the same in both scenarios. And that’s exactly the goal of family-wise error control.
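As a quick check in R of the family-wise error rates mentioned above (three independent tests at an alpha level of 0.05, and after a Bonferroni correction):

alpha_level <- 0.05
1 - (1 - alpha_level)^3     # 0.142625: probability of at least one Type 1 error across three tests
1 - (1 - alpha_level / 3)^3 # 0.049171: the same probability after a Bonferroni correction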
References
Bender, R., & Lange, S. (2001). Adjusting for multiple testing—When and how? Journal of Clinical Epidemiology, 54(4), 343–349.
Bretz, F., Hothorn, T., & Westfall, P. H. (2011). Multiple comparisons using R. CRC Press.
Mayo, D. G. (2018). Statistical inference as severe testing: How to get beyond the statistics wars. Cambridge University Press.
Neyman, J. (1957). “Inductive Behavior” as a Basic Concept of Philosophy of Science. Revue de l’Institut International de Statistique / Review of the International Statistical Institute, 25(1/3), 7. https://doi.org/10.2307/1401671

Thanks to Lisa DeBruine for feedback on an earlier draft of this blog post.


Observed Type 1 Error Rates (Why Statistical Models are Not Reality)

“In the long run we are all dead.” – John Maynard Keynes
When we perform hypothesis tests in a Neyman-Pearson framework we want to make decisions while controlling the rate at which we make errors. We do this in part by setting an alpha level that guarantees we will not say there is an effect when there is no effect more than α% of the time, in the long run.
I like my statistics applied. And in practice I don’t do an infinite number of studies. As Keynes astutely observed, I will be dead before then. So when I control the error rate for my studies, what is a realistic Type 1 error rate I will observe in the ‘somewhat longer run’?
Let’s assume you publish a paper that contains only a single p-value. Let’s also assume the true effect size is 0, so the null hypothesis is true. Your test will return a p-value smaller than your alpha level (and this would be a Type 1 error) or not. With a single study, you don’t have the granularity to talk about a 5% error rate.

In experimental psychology 30 seems to be a reasonable average for the number of p-values that are reported in a single paper (http://doi.org/10.1371/journal.pone.0127872). Let’s assume you perform 30 tests in a single paper and every time the null is true (even though this is often unlikely in a real paper). In the long run, with an alpha level of 0.05 we can expect that 30 * 0.05 = 1.5 p-values will be significant. But in real sets of 30 p-values there is no half of a p-value, so you will either observe 0, 1, 2, 3, 4, 5, or even more Type 1 errors, which equals observed error rates of 0%, 3.33%, 6.67%, 10%, 13.33%, 16.67%, or even more. We can plot the frequency of these Type 1 error rates for 1 million sets of 30 tests.

Each of these error rates occurs with a certain frequency. 21.5% of the time, you will not make any Type 1 errors. 12.7% of the time, you will make 3 Type 1 errors in 30 tests. The average over thousands of papers reporting 30 tests will be a Type 1 error rate of 5%, but no single set of studies is average.
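These frequencies follow directly from the binomial distribution, assuming the 30 tests are independent:

dbinom(0, size = 30, prob = 0.05) # 0.215: probability of observing 0 Type 1 errors in 30 tests
dbinom(3, size = 30, prob = 0.05) # 0.127: probability of observing exactly 3 Type 1 errors in 30 tests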

Now maybe a single paper with 30 tests is not ‘long runnerish’ enough. What we really want to control the Type 1 error rate of is the literature, past, present, and future. Except, we will never read the literature. So let’s assume we are interested in a meta-analysis of 200 studies that examine a topic where the true effect size is 0 for each test. We can plot the frequency of Type 1 error rates for 1 million sets of 200 tests.
 


Now things start to look a bit more like what you would expect. The Type 1 error rate you will observe in your set of 200 tests is close to 5%. However, it is almost exactly as likely that the observed Type 1 error rate is 4.5%. 90% of the distribution of observed Type 1 error rates will lie between 0.025 and 0.075. So, even in ‘somewhat longrunnish’ 200 tests, the observed Type 1 error rate will rarely be exactly 5%, and it might be more useful to think about it as being between 2.5 and 7.5%.

Statistical models are not reality.

A 5% error rate exists only in the abstract world of infinite repetitions, and you will not live long enough to perform an infinite number of studies. In practice, if you (or a group of researchers examining a specific question) do real research, the error rates are somewhere in the range of 5%. Everything has variation in samples drawn from a larger population – error rates are no exception.
When we quantify things, there is a tendency to get lost in digits. But in practice, the levels of random noise we can reasonably expect quickly overwhelm everything from about 3 digits after the decimal onwards. I know we can compute the alpha level after a Pocock correction for two looks at the data in sequential analyses as 0.0294. But this is not the level of granularity that we should have in mind when we think of the error rate we will observe in real lines of research. When we control our error rates, we do so with the goal to end up somewhere reasonably low, after a decent number of hypotheses have been tested. Whether we end up observing 2.5% Type 1 errors or 7.5% errors: Potato, patato.
This does not mean we should stop quantifying numbers precisely when they can be quantified precisely, but we should realize what we get from the statistical procedures we use. We don’t get a 5% Type 1 error rate in any real set of studies we will actually perform. Statistical inferences guide us roughly to where we would ideally like to end up. By all means calculate exact numbers where you can. Strictly adhere to hard thresholds to prevent you from fooling yourself too often. But maybe in 2020 we can learn to appreciate that statistical inferences are always a bit messy. Do the best you reasonably can, but don’t expect perfection. In 2020, and in statistics.

Code
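A minimal sketch of the simulations described above, assuming independent tests of true null hypotheses (1 million sets of 30 tests, and 1 million sets of 200 tests):

set.seed(42)
n_sims <- 1e6
alpha_level <- 0.05

# 1 million 'papers', each reporting 30 tests of true null hypotheses
type1_per_paper <- rbinom(n_sims, size = 30, prob = alpha_level)
prop.table(table(type1_per_paper)) # how often you observe 0, 1, 2, ... Type 1 errors

# 1 million 'meta-analyses', each containing 200 tests of true null hypotheses
observed_rate <- rbinom(n_sims, size = 200, prob = alpha_level) / 200
mean(observed_rate)                  # close to 0.05 on average
quantile(observed_rate, c(.05, .95)) # 90% of observed error rates fall roughly between 0.025 and 0.075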
For a related paper on alpha levels that in practical situations can not be 5%, see https://psyarxiv.com/erwvk/ by Casper Albers. 

Justify Your Alpha by Minimizing or Balancing Error Rates

A preprint (“Justify Your Alpha: A Primer on Two Practical Approaches”) that extends the ideas in this blog post is available at: https://psyarxiv.com/ts4r6

In 1957 Neyman wrote: “it appears desirable to determine the level of significance in accordance with quite a few circumstances that vary from one particular problem to the next.” Despite this good advice, social scientists developed the norm to always use an alpha level of 0.05 as a threshold when making predictions. In this blog post I will explain how you can set the alpha level so that it minimizes the combined Type 1 and Type 2 error rates (thus efficiently making decisions), or balances the Type 1 and Type 2 error rates. You can use this approach to justify your alpha level, and to guide your thoughts about how to design studies more efficiently.

Neyman (1933) provides an example of the reasoning process he believed researchers should go through. He explains how a researcher might have derived an important hypothesis that H0 is true (there is no effect), and will not want to ‘throw it aside too lightly’. The researcher would choose a low alpha level (e.g., 0.01). In another line of research, an experimenter might be interested in detecting factors that would lead to the modification of a standard law, where the “importance of finding some new line of development here outweighs any loss due to a certain waste of effort in starting on a false trail”, and Neyman suggests setting the alpha level to, for example, 0.1.

Which is worse? A Type 1 Error or a Type 2 Error?

As you perform lines of research the data you collect are used as a guide to continue or abandon a hypothesis, to use one paradigm or another. One goal of well-designed experiments is to control the error rates as you make these decisions, so that you do not fool yourself too often in the long run.

Many researchers implicitly assume that Type 1 errors are more problematic than Type 2 errors. Cohen (1988) suggested a Type 2 error rate of 20%, and hence to aim for 80% power, but wrote “.20 is chosen with the idea that the general relative seriousness of these two kinds of errors is of the order of .20/.05, i.e., that Type I errors are of the order of four times as serious as Type II errors. This .80 desired power convention is offered with the hope that it will be ignored whenever an investigator can find a basis in his substantive concerns in his specific research investigation to choose a value ad hoc”. More recently, researchers have argued that false negatives constitute a much more serious problem in science (Fiedler, Kutzner, & Krueger, 2012). I always ask my 3rd year bachelor students: What do you think? Is a Type 1 error in your next study worse than a Type 2 error?

Last year I listened to someone who decided whether new therapies would be covered by the German healthcare system. She discussed Eye Movement Desensitization and Reprocessing (EMDR) therapy. I knew that the evidence that the therapy worked was very weak. As the talk started, I hoped they had decided not to cover EMDR. They had decided to cover it, however, and the researcher convinced me this was a good decision. She said that, although no strong enough evidence was available that it works, the costs of the therapy (which can be done behind a computer) are very low, it was applied in settings where no really good alternatives were available (e.g., inside prisons), and the risk of negative consequences was basically zero. They were aware of the fact that there was a very high probability that EMDR was a Type 1 error, but compared to the cost of a Type 2 error, it was still better to accept the treatment. Another of my favorite examples comes from Field et al. (2004), who perform a cost-benefit analysis on whether to intervene when examining if a koala population is declining, and show that the alpha should be set at 1 (one should always assume a decline is occurring and intervene).

Making these decisions is difficult – but it is better to think about them than to end up with error rates that do not reflect the errors you actually want to make. As Ulrich and Miller (2019) describe, the long run error rates you actually make depend on several unknown factors, such as the true effect size, and the prior probability that the null hypothesis is true. Despite these unknowns, you can design studies that have good error rates for an effect size you are interested in, given some sample size you are planning to collect. Let’s see how.

Balancing or minimizing error rates

Mudge, Baker, Edge, and Houlahan (2012) explain how researchers might want to minimize the total combined error rate. If both Type 1 and Type 2 errors are costly, then it makes sense to optimally reduce both errors as you do studies. This would make decision making overall most efficient. You choose an alpha level that, when used in the power analysis, leads to the lowest combined error rate. For example, with a 5% alpha and 80% power, the combined error rate is 5+20 = 25%, and if power is 99% and the alpha is 5% the combined error rate is 1 + 5 = 6%. Mudge and colleagues show that increasing or reducing the alpha level can lower the combined error rate. This is one of the approaches we mentioned in our ‘Justify Your Alpha’ paper from 2018.

When we wrote ‘Justify Your Alpha’ we knew it would be a lot of work to actually develop methods that people can use. For months, I would occasionally revisit the code Mudge and colleagues used in their paper, which is an adaptation of the pwr library in R, but the code was too complex and I could not get to the bottom of how it worked. After leaving this aside for some months, during which I improved my R skills, some days ago I took a long shower and suddenly realized that I did not need to understand the code by Mudge and colleagues. Instead of getting their code to work, I could write my own code from scratch. Such realizations are my justification for taking showers that are longer than is environmentally friendly.

If you want to balance or minimize error rates, the tricky thing is that the alpha level you set determines the Type 1 error rate, but, through its influence on the statistical power, it also influences the Type 2 error rate. So I wrote a function that examines the range of possible alpha levels (from 0 to 1) and either minimizes the total error (Type 1 + Type 2) or minimizes the difference between the Type 1 and Type 2 error rates, balancing the error rates. It then returns the alpha (Type 1 error rate) and the beta (Type 2 error rate). You can enter any analytic power function that normally works in R and would output the calculated power.

Minimizing Error Rates

Below is the version of the optimal_alpha function used in this blog. Yes, I am defining a function inside another function and this could all look a lot prettier – but it works for now. I plan to clean up the code when I archive my blog posts on how to justify alpha level in a journal, and will make an R package when I do.
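In outline, the function loops over possible alpha levels, computes the power (and thus the Type 2 error rate) from the supplied power function, and returns the alpha that minimizes or balances the weighted error rates. A minimal sketch, assuming a simple weighted combination of the two error rates (the exact weighting by costT1T2 and prior_H1H0 is an illustrative choice; argument names follow the examples later in this post):

library(pwr) # used for the power functions in the examples below

optimal_alpha <- function(power_function, error = "minimal", costT1T2 = 1, prior_H1H0 = 1) {
  # power_function: a string that returns the power for an alpha level 'x', e.g.
  # "pwr.t.test(d = 0.5, n = 100, sig.level = x, type = 'two.sample', alternative = 'two.sided')$power"
  # NOTE: the weighting of the error rates below is an illustrative choice.
  objective <- function(x) {
    power <- eval(parse(text = power_function), envir = list(x = x))
    beta <- 1 - power # Type 2 error rate at alpha level x
    if (error == "balance") {
      abs(costT1T2 * x - prior_H1H0 * beta) # distance between the weighted error rates
    } else {
      (costT1T2 * x + prior_H1H0 * beta) / (costT1T2 + prior_H1H0) # weighted combined error rate
    }
  }
  alpha <- optimize(objective, interval = c(0, 1))$minimum
  beta <- 1 - eval(parse(text = power_function), envir = list(x = alpha))
  list(alpha = alpha, beta = beta)
}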


The code requires you to specify the power function (in a way that the code returns the power, hence the $power at the end) for your test, where the significance level is a variable ‘x’. In this power function you specify the effect size (such as the smallest effect size you are interested in) and the sample size. In my experience, sometimes the sample size is determined by factors outside the control of the researcher. For example, you are working with existing data, or you are studying a sample that is limited in size (e.g., all students in a school). Other times, people have a maximum sample size they can feasibly collect, and accept the error rates that follow from this feasibility limitation. If your sample size is not limited, you can increase the sample size until you are happy with the error rates.

The code calculates the Type 2 error (1-power) across a range of alpha values. For example, we want to calculate the optimal alpha level for an independent t-test. Assume our smallest effect size of interest is d = 0.5, and we are planning to collect 100 participants in each group. We would normally calculate power as follows:

pwr.t.test(d = 0.5, n = 100, sig.level = 0.05, type = 'two.sample', alternative = 'two.sided')$power

This analysis tells us that we have 94% power with a 5% alpha level for our smallest effect size of interest, d = 0.5, when we collect 100 participants in each condition.

If we want to minimize our total error rates, we would enter this function in our optimal_alpha function (while replacing the sig.level argument with ‘x’ instead of 0.05, because we are varying the value to determine the lowest combined error rate).

res = optimal_alpha(power_function = "pwr.t.test(d=0.5, n=100, sig.level = x, type='two.sample', alternative='two.sided')$power")

res$alpha
## [1] 0.05101728
res$beta
## [1] 0.05853977

We see that an alpha level of 0.051 slightly improved the combined error rate, since it will lead to a Type 2 error rate of 0.059 for a smallest effect size of interest of d = 0.5. The combined error rate is 0.11. For comparison, lowering the alpha level to 0.005 would lead to a much larger combined error rate of 0.25.
What would happen if we had decided to collect 200 participants per group, or only 50? With 200 participants per group we would have more than 99% power for d = 0.5, and relatively speaking, a 5% Type 1 error with a 1% Type 2 error is slightly out of balance. In the age of big data, researchers nevertheless use such suboptimal error rates all the time, due to their mindless choice of an alpha level of 0.05. When power is large the combined error rates can be smaller if the alpha level is lowered. If we just replace 100 by 200 in the function above, we see the combined Type 1 and Type 2 error rate is the lowest if we set the alpha level to 0.00866. If you collect large amounts of data, you should really consider lowering your alpha level.

If the maximum sample size we were willing to collect was 50 per group, the optimal alpha level to reduce the combined Type 1 and Type 2 error rates is 0.13. This means that we would have a 13% probability of deciding there is an effect when the null hypothesis is true. This is quite high! However, if we had used a 5% Type 1 error rate, the power would have been 69.69%, with a 30.31% Type 2 error rate, while the Type 2 error rate is ‘only’ 16.56% after increasing the alpha level to 0.13. We increase the Type 1 error rate by 8%, to reduce the Type 2 error rate by 13.5%. This increases the overall efficiency of the decisions we make.
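You can check the numbers for this 50-participants-per-group scenario directly with pwr.t.test:

pwr.t.test(d = 0.5, n = 50, sig.level = 0.05, type = 'two.sample', alternative = 'two.sided')$power # ~0.70, so a ~30% Type 2 error rate
pwr.t.test(d = 0.5, n = 50, sig.level = 0.13, type = 'two.sample', alternative = 'two.sided')$power # ~0.83, so a ~17% Type 2 error rate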

This example relies on the pwr.t.test function in R, but any power function can be used. For example, the code to minimize the combined error rates for the power analysis for an equivalence test would be:

res = optimal_alpha(power_function = "powerTOSTtwo(alpha=x, N=200, low_eqbound_d=-0.4, high_eqbound_d=0.4)")

Balancing Error Rates

You can choose to minimize the combined error rates, but you can also decide that it makes most sense to you to balance the error rates. For example, you think a Type 1 error is just as problematic as a Type 2 error, and therefore, you want to design a study that has balanced error rates for a smallest effect size of interest (e.g., a 5% Type 1 error rate and a 5% Type 2 error rate). Whether to minimize error rates or balance them can be specified in an additional argument in the function. The default is to minimize, but by adding error = "balance" an alpha level is given such that the Type 1 error rate equals the Type 2 error rate.

res = optimal_alpha(power_function = "pwr.t.test(d=0.5, n=100, sig.level = x, type='two.sample', alternative='two.sided')$power", error = "balance")

res$alpha
## [1] 0.05488516
res$beta
## [1] 0.05488402

Repeating our earlier example, the alpha level is 0.055, such that the Type 2 error rate, given the smallest effect size of interest and the sample size, is also 0.055. I feel that even though this does not minimize the overall error rates, it is a justification strategy for your alpha level that often makes sense. If both Type 1 and Type 2 errors are equally problematic, we design a study where we are just as likely to make either mistake, for the effect size we care about.

Relative costs and prior probabilities

So far we have assumed a Type 1 error and Type 2 error are equally problematic. But you might believe Cohen (1988) was right, and Type 1 errors are exactly 4 times as bad as Type 2 errors. Or you might think they are twice as problematic, or 10 times as problematic. However you weigh them, as explained by Mudge et al., 2012, and Ulrich & Miller, 2019, you should incorporate those weights into your decisions.

The function has another optional argument, costT1T2, that allows you to specify the relative cost of Type 1 versus Type 2 errors. By default this is set to 1, but you can set it to 4 (or any other value) such that Type 1 errors are 4 times as costly as Type 2 errors. This will change the weight of Type 1 errors compared to Type 2 errors, and thus also the choice of the best alpha level.

res = optimal_alpha(power_function = "pwr.t.test(d=0.5, n=100, sig.level = x, type='two.sample', alternative='two.sided')$power", error = "minimal", costT1T2 = 4)

res$alpha
## [1] 0.01918735
res$beta
## [1] 0.1211773

Now, the alpha level that minimized the weighted Type 1 and Type 2 error rates is 0.019.

Similarly, you can take into account prior probabilities that either the null is true (and you will observe a Type 1 error), or that the alternative hypothesis is true (and you will observe a Type 2 error). By incorporating these expectations, you can minimize or balance error rates in the long run (assuming your priors are correct). Priors can be specified using the prior_H1H0 argument, which by default is 1 (H1 and H0 are equally likely). Setting it to 4 means you think the alternative hypothesis (and hence Type 2 errors) is 4 times more likely than the null hypothesis (and Type 1 errors).

res = optimal_alpha(power_function = "pwr.t.test(d=0.5, n=100, sig.level = x, type='two.sample', alternative='two.sided')$power", error = "minimal", prior_H1H0 = 2)

res$alpha
## [1] 0.07901679
res$beta
## [1] 0.03875676

If you think H1 is twice as likely to be true as H0, you need to worry less about Type 1 errors, and now the alpha level that minimizes the weighted error rates is 0.079. It is always difficult to decide upon priors (unless you are Omniscient Jones), but even if you ignore them, you are making the decision that H1 and H0 are equally plausible.

Conclusion

You can’t abandon a practice without an alternative. Minimizing the combined error rate, or balancing error rates, provides two alternative approaches to the normative practice of setting the alpha level to 5%. Together with the approach of reducing the alpha level as a function of the sample size, I invite you to explore ways to set error rates based on something other than convention. A downside of abandoning mindless statistics is that you need to think about difficult questions. How much more negative is a Type 1 error than a Type 2 error? Do you have any ideas about the prior probabilities? And what is the smallest effect size of interest? Answering these questions is difficult, but considering them is important for any study you design. The experiments you run might very well be more informative, and more efficient. So give it a try.

References
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed). Hillsdale, N.J: L. Erlbaum Associates.
Fiedler, K., Kutzner, F., & Krueger, J. I. (2012). The Long Way From α-Error Control to Validity Proper: Problems With a Short-Sighted False-Positive Debate. Perspectives on Psychological Science, 7(6), 661–669. https://doi.org/10.1177/1745691612462587
Lakens, D., Adolfi, F. G., Albers, C. J., Anvari, F., Apps, M. A. J., Argamon, S. E., … Zwaan, R. A. (2018). Justify your alpha. Nature Human Behaviour, 2, 168–171. https://doi.org/10.1038/s41562-018-0311-x
Miller, J., & Ulrich, R. (2019). The quest for an optimal alpha. PLOS ONE, 14(1), e0208631. https://doi.org/10.1371/journal.pone.0208631
Mudge, J. F., Baker, L. F., Edge, C. B., & Houlahan, J. E. (2012). Setting an Optimal α That Minimizes Errors in Null Hypothesis Significance Tests. PLOS ONE, 7(2), e32734. https://doi.org/10.1371/journal.pone.0032734

Justify Your Alpha by Decreasing Alpha Levels as a Function of the Sample Size

A preprint (“Justify Your Alpha: A Primer on Two Practical Approaches”) that extends and improves the ideas in this blog post is available at: https://psyarxiv.com/ts4r6  
 
Testing whether observed data should surprise us, under the assumption that some model of the data is true, is a widely used procedure in psychological science. Tests against a null model, or against the smallest effect size of interest for an equivalence test, can guide your decisions to continue or abandon research lines. Seeing whether a p-value is smaller than an alpha level is rarely the only thing you want to do, but especially early on in experimental research lines where you can randomly assign participants to conditions, it can be a useful thing.

Regrettably, this procedure is often performed rather mindlessly. To do Neyman-Pearson hypothesis testing well, you should carefully think about the error rates you find acceptable. How often do you want to miss the smallest effect size you care about, if it is really there? And how often do you want to claim there is an effect when you are actually wrong? It is important to justify your error rates when designing an experiment. In this post I will provide one justification for setting the alpha level (something we have recommended makes more sense than using a fixed alpha level).

Papers explaining how to justify your alpha level are very rare (for an example, see Mudge, Baker, Edge, & Houlahan, 2012). Here I want to discuss one of the least known, but easiest suggestions on how to justify alpha levels in the literature, proposed by Good. The idea is simple, and has been supported by many statisticians in the last 80 years: Lower the alpha level as a function of your sample size.

The idea behind this recommendation is most extensively discussed in a book by Leamer (1978, p. 92). He writes:

The rule of thumb quite popular now, that is, setting the significance level arbitrarily to .05, is shown to be deficient in the sense that from every reasonable viewpoint the significance level should be a decreasing function of sample size.

Leamer (you can download his book for free) correctly notes that this behavior, an alpha level that is a decreasing function of the sample size, makes sense from both a Bayesian and a Neyman-Pearson perspective. Let me explain.

Imagine a researcher who performs a study that has 99.9% power to detect the smallest effect size the researcher is interested in, based on a test with an alpha level of 0.05. Such a study also has 99.8% power when using an alpha level of 0.03. Feel free to follow along here, by setting the sample size to 204, the effect size to 0.5, alpha or p-value (upper limit) to 0.05, and the p-value (lower limit) to 0.03.

We see that if the alternative hypothesis is true, only 0.1% of studies will, in the long run, yield a p-value between 0.03 and 0.05. When the null hypothesis is true, 2% of studies will, in the long run, yield a p-value between 0.03 and 0.05. Note how this makes p-values between 0.03 and 0.05 more likely when there is no true effect than when there is an effect. This is known as Lindley’s paradox (and I explain this in more detail in Assignment 1 in my MOOC, which you can also do here).
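If you prefer to check these numbers in R rather than in the app, a quick sketch using the pwr package (assuming a two-sided, two-sample t-test with n = 204 per group and d = 0.5) gives approximately the same values:

library(pwr)

# Probability of observing a p-value between 0.03 and 0.05 when H1 is true:
pwr.t.test(n = 204, d = 0.5, sig.level = 0.05)$power -
  pwr.t.test(n = 204, d = 0.5, sig.level = 0.03)$power
# roughly 0.001 (0.1% of studies)

# When H0 is true, p-values are uniformly distributed, so the probability is simply:
0.05 - 0.03
# 0.02 (2% of studies)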

Although you can argue that you are still making a Type 1 error at most 5% of the time in the above situation, I think it makes sense to acknowledge there is something weird about having a Type 1 error rate of 5% when you have a Type 2 error rate of 0.1% (again, see Mudge, Baker, Edge, & Houlahan, 2012, who suggest balancing error rates). To me, it makes sense to design a study where error rates are more balanced, and a significant effect is only declared for p-values that are more likely to occur when the alternative model is true than when the null model is true.

Because power increases as the sample size increases, and because Lindley’s paradox (Lindley, 1957; see also Cousins, 2017) can be prevented by lowering the alpha level sufficiently, the idea to lower the significance level as a function of the sample size is very reasonable. But how?

Zellner (1971) discusses how the critical value for a frequentist hypothesis test approaches a limit as the sample size increases (i.e., a critical value of 1.96 for p = 0.05 in a two-sided test), whereas the critical value for a Bayes factor increases as the sample size increases (see also Rouder, Speckman, Sun, Morey, & Iverson, 2009). This difference lies at the heart of Lindley’s paradox, and under certain assumptions comes down to a factor of √n. As Zellner (1971, footnote 19, page 304) writes (K01 is the formula for the Bayes factor):

If a sampling theorist were to adjust his significance level upward as n grows larger, which seems reasonable, zα would grow with n and tend to counteract somewhat the influence of the √n factor in the expression for K01.

Jeffreys (1939) discusses Neyman and Pearson’s work and writes:

We should therefore get the best result, with any distribution of α, by some form that makes the ratio of the critical value to the standard error increase with n. It appears then that whatever the distribution may be, the use of a fixed P limit cannot be the one that will make the smallest number of mistakes.

He discusses the issue more in Appendix B, where he compared his own test (Bayes factors) against Neyman-Pearson decision procedures, and he notes that:

In spite of the difference in principle between my tests and those based on the P integrals, and the omission of the latter to give the increase of the critical values for large n, dictated essentially by the fact that in testing a small departure found from a large number of observations we are selecting a value out of a long range and should allow for selection, it appears that there is not much difference in the practical recommendations. Users of these tests speak of the 5 per cent. point in much the same way as I should speak of the K = 10^-1/2 point, and of the 1 per cent. point as I should speak of the K = 10^-1 point; and for moderate numbers of observations the points are not very different. At large numbers of observations there is a difference, since the tests based on the integral would sometimes assert significance at departures that would actually give K > 1. Thus there may be opposite decisions in such cases. But they will be very rare.

So even though, according to Jeffreys, extremely different conclusions from Bayes factors and frequentist tests will be rare, the difference does become noticeable as the sample size grows.

This brings us to Good’s (1982) easy solution. His paper is basically just a single page (I’d love something akin to a ‘Comments, Conjectures, and Conclusions’ format in Meta-Psychology! – note that Good himself was the editor of that section, whose instructions started with ‘Please be succinct but lucid and interesting’, and his paper reads just like a blog post).

He also explains the rationale in Good (1992):

‘we have empirical evidence that sensible P values are related to weights of evidence and, therefore, that P values are not entirely without merit. The real objection to P values is not that they usually are utter nonsense, but rather that they can be highly misleading, especially if the value of N is not also taken into account and is large.’

Based on the observation by Jeffreys (1939) that, under specific circumstances, the Bayes factor against the null hypothesis is approximately inversely proportional to √N, Good (1982) suggests a standardized p-value to bring p-values into a closer relationship with weights of evidence:
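In formula form (my reconstruction, chosen to be consistent with the numerical examples below), the standardized p-value for a total sample size N is:

$$p_{stan} = \min\!\left(\tfrac{1}{2},\; p \cdot \sqrt{N/100}\right)$$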

This formula standardizes the p-value to the evidence against the null hypothesis that would have been obtained if the pstan-value had been the tail-area probability observed in a sample of 100 participants (I think the formula is only intended for between-subjects designs – I would appreciate anyone weighing in in the comments on whether it can be extended to within-subjects designs). When the sample size is 100, the p-value and pstan are identical. But for larger sample sizes pstan is larger than p. For example, a p = .05 observed in a sample of 500 would have a pstan of 0.11, which is not enough to reject the null hypothesis in favor of the alternative. Good (1988) demonstrates great insight when he writes: ‘I guess that standardized p-values will not become standard before the year 2000.’
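A quick check of the p = .05, N = 500 example above, using the reconstructed formula:

p <- 0.05
N <- 500
min(0.5, p * sqrt(N / 100))
# 0.1118, which rounds to the 0.11 mentioned above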


Good doesn’t give many examples of how standardized p-values should be used in practice, but I think it is easier to think in terms of a standardized alpha level (the logic is the same, just as you can either double the p-value or halve the alpha level when correcting for two comparisons with a Bonferroni correction). So instead of an alpha level of 0.05, we can think of a standardized alpha level:
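For a total sample size N this standardized alpha level is (again my reconstruction, consistent with the examples below):

$$\alpha_{stan} = \alpha \cdot \sqrt{100/N}$$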
Again, with 100 participants α and αstan are the same, but as the sample size increases above 100, the alpha level becomes smaller. For example, an α of .05 with a sample size of 500 corresponds to an αstan of 0.02236.

So one way to justify your alpha level is to use an alpha level that decreases as the sample size increases. I for one have always thought it rather nonsensical to use an alpha level of 0.05 in all meta-analyses (especially when testing a meta-analytic effect size based on thousands of participants against zero), or in large collaborative research projects such as Many Labs, where analyses are performed on very large samples. If you have thousands of participants, you have extremely high power for most effect sizes the original studies could have detected in a significance test. With such a low Type 2 error rate, why keep the Type 1 error rate fixed at 5%, which is so much larger than the Type 2 error rate in these analyses? It just doesn’t make sense to me. Alpha levels in meta-analyses or large-scale data analyses should be lowered as a function of the sample size. In case you are wondering: an alpha level of .005 would be used when the sample size is 10,000.
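As a quick check of the two alpha levels mentioned above, again using the reconstructed standardization:

0.05 * sqrt(100 / 500)    # 0.02236 for a sample size of 500
0.05 * sqrt(100 / 10000)  # 0.005 for a sample size of 10,000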

When designing a study based on a specific smallest effect size of interest for which you desire decent power (e.g., 90%), we run into a small challenge: the power analysis now has two unknowns, the sample size (which is a function of the power, the effect size, and the alpha level) and the standardized alpha level (which is a function of the sample size). Luckily, this is nothing a bit of R-fu can’t solve with some iterative power calculations. [R code to calculate the standardized alpha level, and to perform an iterative power analysis, is at the bottom of the post]
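In the meantime, here is a minimal sketch of what such an iteration could look like. This is not the code from the bottom of the original post; it assumes the pwr package, a two-sided one-sample t-test, a smallest effect size of interest of d = 0.3, a desired power of 90%, and the reconstructed standardization αstan = α·√(100/N):

library(pwr)

alpha <- 0.05   # the alpha level you would use with N = 100
d     <- 0.3    # smallest effect size of interest
n     <- 100    # starting value for the sample size

for (i in 1:20) {
  alpha_stan <- alpha * sqrt(100 / n)   # standardized alpha at the current n
  n <- ceiling(pwr.t.test(d = d, sig.level = alpha_stan, power = 0.90,
                          type = "one.sample")$n)
}

n                      # sample size after the iterations have stabilized
alpha * sqrt(100 / n)  # the corresponding standardized alpha level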

When we wrote Justify Your Alpha (I recommend downloading the original draft before peer review, because it has more words and more interesting references) one of the criticisms I heard most often is that we gave no solutions for how to justify your alpha. I hope this post makes clear that statisticians have argued that the alpha level should not be a fixed value ever since it was invented. There are already some solutions available in the literature. I like Good’s approach because it is simple. In my experience, people like simple solutions. It might not be a full-fledged decision-theoretic cost-benefit analysis, but it beats using a fixed alpha level. I recently used it in a submission for a Registered Report. At the same time, I think it has never been used in practice, so I look forward to any comments, conjectures, and conclusions you might have.

References

Good, I. J. (1982). C140. Standardized tail-area probabilities. Journal of Statistical Computation and Simulation, 16(1), 65–66. https://doi.org/10.1080/00949658208810607
Good, I. J. (1988). The interface between statistics and philosophy of science. Statistical Science, 3(4), 386–397.
Good, I. J. (1992). The Bayes/Non-Bayes Compromise: A Brief Review. Journal of the American Statistical Association, 87(419), 597. https://doi.org/10.2307/2290192
Lakens, D., Adolfi, F. G., Albers, C. J., Anvari, F., Apps, M. A. J., Argamon, S. E., … Zwaan, R. A. (2018). Justify your alpha. Nature Human Behaviour, 2, 168–171. https://doi.org/10.1038/s41562-018-0311-x
Leamer, E. E. (1978). Specification Searches: Ad Hoc Inference with Nonexperimental Data. New York: Wiley.
Mudge, J. F., Baker, L. F., Edge, C. B., & Houlahan, J. E. (2012). Setting an Optimal α That Minimizes Errors in Null Hypothesis Significance Tests. PLOS ONE, 7(2), e32734. https://doi.org/10.1371/journal.pone.0032734
Rouder, J. N., Speckman, P. L., Sun, D., Morey, R. D., & Iverson, G. (2009). Bayesian t tests for accepting and rejecting the null hypothesis. Psychonomic Bulletin & Review, 16(2), 225–237. https://doi.org/10.3758/PBR.16.2.225
Zellner, A. (1971). An introduction to Bayesian inference in econometrics. New York: Wiley.



 

Equivalence Testing and the Second Generation P-Value

Recently Blume, D’Agostino McGowan, Dupont, & Greevy (2018) published an article titled: “Second-generation p-values: Improved rigor, reproducibility, & transparency in statistical analyses”. As it happens, I would greatly appreciate more rigor, reproducibility, and transparency in statistical analyses, so my interest was piqued. On Twitter I saw a slide promising an updated version of the p-value that can support null hypotheses, takes practical significance into account, has a straightforward interpretation, and ideally never needs adjustments for multiple comparisons. It sounded like someone had found the goose that lays the golden eggs.

Upon reading the manuscript, I noticed the statistic is surprisingly similar to equivalence testing, which I’ve written about recently and created an R package for (Lakens, 2017). The second generation p-value (SGPV) relies on specifying an equivalence range of values around the null hypothesis that are practically equivalent to zero (e.g., 0 ± 0.3). If the estimation interval falls completely within the equivalence range, the SGPV is 1. If the confidence interval lies completely outside of the equivalence range, the SGPV is 0. Otherwise the SGPV is a value between 0 and 1 that expresses the overlap of the confidence interval with the equivalence range, divided by the total width of the confidence interval.
Testing whether the confidence interval falls completely within the equivalence bounds is equivalent to the two one-sided tests (TOST) procedure, where the data are tested against the lower equivalence bound in the first one-sided test, and against the upper equivalence bound in the second one-sided test. If both tests allow you to reject effects as extreme or more extreme than the equivalence bounds, you can reject the presence of an effect large enough to be meaningful, and conclude the observed effect is practically equivalent to zero. You can also simply check whether a 90% confidence interval falls completely within the equivalence bounds. Note that testing whether the 95% confidence interval falls completely outside of the equivalence range is known as a minimum-effect test (Murphy, Myors, & Wolach, 2014).
Together with my collaborator Marie Delacre, I compared the two approaches to truly understand how second generation p-values accomplish what they are advertised to do, and what they can contribute to our statistical toolbox.
To examine the relation between the TOST p-value and the SGPV we can calculate both statistics across a range of observed effect sizes. In Figure 1 p-values are plotted for the TOST procedure and the SGPV. The statistics are calculated for hypothetical one-sample t-tests for all means that can be observed in studies ranging from 140 to 150 (on the x-axis). The equivalence range is set to 145 ± 2 (i.e., an equivalence range from 143 to 147), the observed standard deviation is assumed to be 2, and the sample size is 100. The SGPV treats the equivalence range as the null-hypothesis, while the TOST procedure treats the values outside of the equivalence range as the null-hypothesis. For ease of comparison we can reverse the SGPV (by calculating 1-SGPV), which is used in the plot below.
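If you want to compute both statistics for a single observed mean, a minimal base R sketch (ignoring the small sample correction discussed below, and using, for the sake of the example, an observed mean of 144, the equivalence range of 143 to 147, a standard deviation of 2, and the n = 30 from the Figure 1 caption) looks like this:

m <- 144                         # observed sample mean
mu_low <- 143; mu_high <- 147    # equivalence range
s <- 2                           # observed standard deviation
n <- 30                          # sample size
alpha <- 0.05
se <- s / sqrt(n)
dfree <- n - 1

# TOST: two one-sided t-tests; the larger of the two p-values is reported.
t_low  <- (m - mu_low) / se      # test against the lower bound (H0: mu <= 143)
t_high <- (m - mu_high) / se     # test against the upper bound (H0: mu >= 147)
p_tost <- max(pt(t_low, dfree, lower.tail = FALSE),
              pt(t_high, dfree, lower.tail = TRUE))

# SGPV: proportion of the 95% CI that overlaps with the equivalence range.
ci <- m + c(-1, 1) * qt(1 - alpha / 2, dfree) * se
overlap <- max(0, min(ci[2], mu_high) - max(ci[1], mu_low))
sgpv <- overlap / (ci[2] - ci[1])

p_tost   # p-value from the TOST procedure
sgpv     # second generation p-value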
 
 
Figure 1: Comparison of p-values from TOST (black line) and 1-SGPV (dotted grey line) across a range of observed sample means (x-axis) tested against a mean of 145 in a one-sample t-test with a sample size of 30 and a standard deviation of 2.
It is clear that the SGPV and the p-value from TOST are very closely related. The situation in Figure 1 is not an exception – in our pre-print we describe how the SGPV and the p-value from the TOST procedure are always directly related when confidence intervals are symmetrical. You can play around with this Shiny app to confirm this for yourself: http://shiny.ieis.tue.nl/TOST_vs_SGPV/.
There are three situations where the p-value from the TOST procedure and the SGPV are not directly related. The SGPV is 1 when the confidence interval falls completely within the equivalence bounds, but p-values from the TOST procedure continue to differentiate and will, for example, distinguish between a p = 0.048 and a p = 0.002. The same happens when the SGPV is 0 (and p-values from the TOST procedure fall between 0.975 and 1).
The third situation in which the TOST p-value and the SGPV differ is when the ‘small sample correction’ is at play in the SGPV. This “correction” kicks in whenever the confidence interval is more than twice as wide as the equivalence range. However, it is not a correction in the typical sense of the word, since the SGPV is not adjusted to any ‘correct’ value. When the normal calculation would be ‘misleading’ (i.e., the SGPV would be small, which normally suggests support for the alternative hypothesis, even though all values in the equivalence range are also supported), the SGPV is instead set to 0.5, which according to Blume and colleagues signals that the SGPV is ‘uninformative’. In all three situations the p-value from the equivalence test distinguishes between scenarios where the SGPV yields the same result.
We can examine this situation by calculating the SGPV and performing the TOST for a situation where sample sizes are small and the equivalence range is narrow, such that the CI is more than twice as large as the equivalence range.
 
Figure 2: Comparison of p-values from TOST (black line) and SGPV (dotted grey line) across a range of observed sample means (x-axis). Because the sample size is small (n = 10) and the CI is more than twice as wide as the equivalence range (set to -0.4 to 0.4), the SGPV is set to 0.5 (horizontal light grey line) across a range of observed means.

The main novelty of the SGPV is that it is meant to be used as a descriptive statistic. However, we show that the SGPV is difficult to interpret when confidence intervals are asymmetric, and when the ‘small sample correction’ is operating. For an extreme example, see Figure 3, where the SGPV is plotted for a correlation (for which confidence intervals are asymmetric).

Figure 3: Comparison of p-values from TOST (black line) and 1-SGPV (dotted grey curve) across a range of observed sample correlations (x-axis) tested against equivalence bounds of r = 0.4 and r = 0.8 with n = 10 and an alpha of 0.05.
Even under ideal circumstances, the SGPV is mainly meaningful when it is either 1, 0, or inconclusive (see all examples in Blume et al., 2018). But to categorize your results into one of these three outcomes you don’t need to calculate anything – you can just look at whether the confidence interval falls inside, outside, or overlaps with the equivalence range, and thus the SGPV loses its value as a descriptive statistic.
When discussing the lack of a need for error correction, Blume and colleagues compare the SGPV to null-hypothesis tests. However, the more meaningful comparison is with the TOST procedure, and given the direct relationship, not correcting for multiple comparisons will inflate the probability of concluding the absence of a meaningful effect in exactly the same way as when calculating p-values for an equivalence test. Equivalence tests provide an easier and more formal way to control both the Type 1 error rate (by setting the alpha level) and the Type 2 error rate (by performing an a priori power analysis; see Lakens, Scheel, & Isager, 2018).
Conclusion
There are strong similarities between p-values from the TOST procedure and the SGPV, and in all situations where the statistics yield different results, the behavior of the p-value from the TOST procedure is more consistent and easier to interpret. More details can be found in our pre-print (where you can also leave comments or suggestions for improvement using hypothes.is). Our comparisons show that when proposing alternatives to null-hypothesis tests, it is important to compare new proposals to already existing procedures. We believe equivalence tests achieve the goals of the second generation p-value while allowing users to more easily control error rates, and while yielding more consistent statistical outcomes.


References
Blume, J. D., D’Agostino McGowan, L., Dupont, W. D., & Greevy, R. A. (2018). Second-generation p-values: Improved rigor, reproducibility, & transparency in statistical analyses. PLOS ONE, 13(3), e0188299. https://doi.org/10.1371/journal.pone.0188299
Lakens, D. (2017). Equivalence Tests: A Practical Primer for t Tests, Correlations, and Meta-Analyses. Social Psychological and Personality Science, 8(4), 355–362. https://doi.org/10.1177/1948550617697177
Lakens, D., Scheel, A. M., & Isager, P. M. (2018). Equivalence Testing for Psychological Research: A Tutorial. Advances in Methods and Practices in Psychological Science, 2515245918770963. https://doi.org/10.1177/2515245918770963.
Murphy, K. R., Myors, B., & Wolach, A. H. (2014). Statistical power analysis: a simple and general model for traditional and modern hypothesis tests (Fourth edition). New York: Routledge, Taylor & Francis Group.