This blog post is a review of a manuscript that I hope will never be published, although it probably will be. In that case, this review will serve as a draft for a PubPeer comment. Because the manuscript is under review, I cannot share it here, but the review makes clear what the authors are trying to do.
Review
I assume that I was selected as a reviewer for this manuscript because the editor recognized my expertise in this research area. While most of my work on replicability has been published in the form of blog posts, I have also published a few peer-reviewed publications that are relevant to this topic. Most important, I have provided estimates of replicability for social psychology using the most advanced method to do so, z-curve (Bartos & Schimmack, 2020; Brunner & Schimmack, 2020), using the extensive coding by Motyl et al. (2017) (see Schimmack, 2020). I was surprised that this work was not mentioned.
In contrast, Yeager et al.’s (2019) replication study of 12 experiments is cited, even though, as I recall, 11 of the 12 studies replicated successfully. So, it is not clear why this study is cited as evidence that replication attempts often “produc[e] pessimistic results.”
While I agree that there are many explanations that have been offered for replication failures, I do not agree that listing all of these explanations is impossible and that it is reasonable to focus on some of these explanations, especially if the main reason is left out. Namely, the main reason for replication failures is that original studies are conducted with low statistical power and only those that achieve significance are published (Sterling et al., 1995; Schimmack, 2020). Omitting this explanation undermines the contribution of this article.
The listed explanations are:
(1) original articles making use of questionable research practices that result in Type I errors
This explanation conflates two problems. QRPs are used to obtain significance when the power to do so is low, but a replication failure does not tell us whether the population effect size is zero (the original result was a type-I error) or above zero (the replication produced a type-II error).
(2) original research’s pursuit of counterintuitive findings that may have lower a priori probabilities and thus poor chances at replication
This explanation assumes that there are a lot of type-I errors, but we do not really know whether the population effect sizes are zero or not. So, this is not a separate explanation; it is an explanation for why there might be many type-I errors, which presupposes that there are many type-I errors, something we do not know.
(3) the presence of unexamined moderators that produce differences between original and replication research (Dijksterhuis, 2014; Simons et al., 2017),
This citation ignores that empirical tests of this hypothesis have failed to provide evidence for it (van Bavel et al., 2016).
(4) specific design choices in original or replication research that produce different conclusions (Bouwmeester et al., 2017; Luttrell et al., 2017; Noah et al., 2018).
This argument is not different from (3). Replication failures are attributed to moderating factors that are always possible because exact replications are impossible.
To date, discussions of possible explanations for poor replication have generally been presented as distinct accounts for poor replication, with little attempt being made to organize them into a coherent conceptual framework.
This claim ignores my detailed discussion of the various explanations, including some not discussed by the authors (Schooler’s decline effect; Fiedler’s regression to the mean; see Schimmack, 2020).
The selection of journals is questionable. Psychological Science is not a general (meta-)psychological journal. Instead, there are two journals, The Journal of General Psychology and Meta-Psychology, that contain relevant articles.
The authors then introduce Cook and Campbell’s typology of validity and try to relate it to accounts of replication failures based on some work by Fabrigar et al. (2020). This attempt is flawed because validity is a broader construct than replicability or reliability. Measures can be reliable and correlations can be replicable even if the conclusions drawn from these findings are invalid. This is Intro Psych level stuff.
Statistical conclusion validity is concerned with the question of “whether or not two or more variables are related.” This is of course nothing else than the distinction between true and false conclusions based on significant or non-significant results. As noted above, even statistical conclusion validity is not directly related to replication failures because replication failures do not tell us whether the population effect size is zero or not. Yet, we might argue that there is a risk of false positive conclusions when statistical significance is achieved with QRPs and these results do not replicate. So, in some sense statistical conclusion validity is tied to the replication crisis in experimental social psychology.
Internal validity is about the problem of inferring causality from correlations. This issue has nothing to do with the replication crisis because replication failures can occur in experiments and correlational studies. The only indirect link to internal validity is that experimental social psychology prided itself on the use of between-subject experiments to maximize internal validity and minimize demand effects, but often used ineffective manipulations (priming) that required QRPs to get significance especially in the tiny samples that were used because experiments are more time-consuming and labor intensive. In contrast, survey studies often are more replicable because they have larger samples. But the key point remains, it would be absurd to explain replication failures directly as a function of low internal validity.
Construct validity is falsely described as “the degree to which the operationalizations used in the research effectively capture their intended constructs.” The problem here is the term operationalization. Once a construct is operationalized with some procedure, it is defined by the procedure (intelligence is what the IQ test measures) and there is no way to challenge the validity of the construct. In contrast, measurement implies that constructs exist independently of one specific procedure and it is possible to examine how well a measure reflects variation in the construct (Cronbach & Meehl, 1955). That said, there is no relationship between construct validity and replicability because systematic measurement error can produce spurious correlations between measures in correlational studies that are highly replicable (e.g., socially desirable responding). In experiments, systematic measurement error will attenuate effect sizes, but it will do so equally in original studies and replication studies. Thus, low construct validity also provides no explanation for replication failures.
External validity is defined as “the degree to which an effect generalizes to different populations and contexts.” This validation criterion is also only slightly related to replication failures, namely when there are concerns about contextual sensitivity or hidden moderators. A replication study in a different population or context might fail because the population effect size varies across populations or contexts. While this is possible, there is little evidence that contextual sensitivity is a major factor.
In short, it is a red herring in explanations for replication failures or the replication crisis to talk about validity. Replicability is necessary but not sufficient for good science.
It is therefore not surprising that the authors found most discussions of replication failures focus on statistical conclusion validity. Any other finding would make no sense. It is just not clear why we needed a text analysis to reveal this.
However, the authors seem to be unable to realize that the other types of validity are not related to replication failures when they write “What does this study add? Identifies that statistical conclusion validity is over-emphasized in replication analysis”
Over-emphasized??? This is an absurd conclusion based on a failure to make a clear distinction between replicability/reliability and validity.
The z-curve analysis of results in this journal shows (a) that many published results are based on studies with low to modest power, (b) selection for significance inflates effect size estimates and the discovery rate of reported results, and (c) there is no evidence that research practices have changed over the past decade. Readers should be careful when they interpret results and recognize that reported effect sizes are likely to overestimate real effect sizes, and that replication studies with the same sample size may fail to produce a significant result again. To avoid misleading inferences, I suggest using alpha = .005 as a criterion for valid rejections of the null-hypothesis. Using this criterion, the risk of a false positive result is below 2%. I also recommend computing a 99% confidence interval rather than the traditional 95% confidence interval for the interpretation of effect size estimates.
Given the low power of many studies, readers also need to avoid the fallacy of interpreting non-significant results as evidence for the absence of an effect. With 50% power, results can easily switch in a replication study, so that a significant result becomes non-significant and a non-significant result becomes significant. However, selection for significance makes it more likely that significant results become non-significant than that results change in the opposite direction.
The average power of studies in a heterogeneous journal like Frontiers in Psychology provides only circumstantial evidence for the evaluation of individual results. When other information is available (e.g., a z-curve analysis of a discipline, author, or topic), it may be more appropriate to use this information.
Report
Frontiers in Psychology was created in 2010 as a new online-only journal for psychology. It covers many different areas of psychology, although some areas have specialized Frontiers journals, like Frontiers in Behavioral Neuroscience.
The business model of Frontiers journals relies on publication fees paid by authors, while published articles are freely available to readers.
The number of articles in Frontiers in Psychology increased quickly, from 131 articles in 2010 to 8,072 articles in 2022 (source: Web of Science). With over 8,000 published articles a year, Frontiers in Psychology is an important outlet for psychological researchers to publish their work. Many specialized print journals publish fewer than 100 articles a year. Thus, Frontiers in Psychology offers a broad and large sample of psychological research that is equivalent to a composite of 80 or more specialized journals.
Another advantage of Frontiers in Psychology is that it has a relatively low rejection rate compared to specialized journals with limited journal space. While high rejection rates may allow journals to prioritize exceptionally good research, articles published in Frontiers in Psychology are more likely to reflect the common research practices of psychologists.
To examine the replicability of research published in Frontiers in Psychology, I downloaded all published articles as PDF files, converted the PDF files to text files, and extracted test statistics (F, t, and z tests) from the published articles. Although this method does not capture all published results, there is no a priori reason that results reported in this format differ from other results. More importantly, changes in research practices, such as higher power due to larger samples, would be reflected in all statistical tests.
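For readers who want to see what this conversion involves, here is a minimal sketch. It is not the actual extraction script, which handles many more reporting formats; the regular expression and the example sentence are made up for illustration.

```python
import re
from scipy import stats

def t_to_z(t, df):
    """Convert a t-value to an absolute z-score via its two-sided p-value."""
    p = 2 * stats.t.sf(abs(t), df)
    return stats.norm.isf(p / 2)

def f_to_z(f_value, df1, df2):
    """Convert an F-value to an absolute z-score via its p-value."""
    p = stats.f.sf(f_value, df1, df2)
    return stats.norm.isf(p / 2)

# Example: pull simple "t(df) = value" reports out of article text.
text = "The effect was significant, t(48) = 2.31, p = .025."
for df, t in re.findall(r"t\((\d+)\)\s*=\s*([-\d.]+)", text):
    print(round(t_to_z(float(t), int(df)), 2))  # ~2.24
```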
As Frontiers in Psychology started only shortly before the replication crisis in psychology increased awareness of the problems of low statistical power and selection for significance (publication bias), I was not able to examine replicability before 2011. I also found little evidence of changes in the years from 2010 to 2015. Therefore, I use this time period as the starting point and benchmark for later years.
Figure 1 shows a z-curve plot of results published from 2010 to 2014. All test-statistics are converted into z-scores. Z-scores greater than 1.96 (the solid red line) are statistically significant at alpha = .05 (two-sided) and typically used to claim a discovery (rejection of the null-hypothesis). Sometimes even z-scores between 1.65 (the dotted red line) and 1.96 are used to reject the null-hypothesis either as a one-sided test or as marginal significance. Using alpha = .05, the plot shows 71% significant results, which is called the observed discovery rate (ODR).
Visual inspection of the plot shows a peak of the distribution right at the significance criterion. It also shows that the frequency of z-scores drops sharply on the left side of the peak, where results do not reach the criterion for significance. This wonky distribution cannot be explained by sampling error. Rather, it reveals a selective bias to publish significant results by means of questionable practices such as not reporting failed replication studies or inflating effect sizes by means of statistical tricks. To quantify the amount of selection bias, z-curve fits a model to the distribution of significant results and estimates the distribution of non-significant results (the grey curve in the range of non-significant results). The discrepancy between the observed distribution and the expected distribution shows the file drawer of missing non-significant results. Z-curve estimates that the reported significant results make up only 31% of the estimated full distribution. This is called the expected discovery rate (EDR). Thus, there are more than twice as many significant results as the statistical power of the studies justifies (71% vs. 31%). Confidence intervals around these estimates show that the discrepancy is not just due to chance but reflects active selection for significance.
Using a formula developed by Soric (1989), it is possible to estimate the false discovery risk (FDR). That is, the probability that a significant result was obtained without a real effect (a type-I error). The estimated FDR is 12%. This may not be alarming, but the risk varies as a function of the strength of evidence (the magnitude of the z-score). Z-scores that correspond to p-values close to p =.05 have a higher false positive risk and large z-scores have a smaller false positive risk. Moreover, even true results are unlikely to replicate when significance was obtained with inflated effect sizes. The most optimistic estimate of replicability is the expected replication rate (ERR) of 69%. This estimate, however, assumes that a study can be replicated exactly, including the same sample size. Actual replication rates are often lower than the ERR and tend to fall between the EDR and ERR. Thus, the predicted replication rate is around 50%. This is slightly higher than the replication rate in the Open Science Collaboration replication of 100 studies which was 37%.
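Soric’s formula is simple enough that readers can check the reported numbers themselves. Here is a minimal sketch; the function name is mine.

```python
# Soric's (1989) upper bound on the false discovery risk, given a discovery
# rate and the significance criterion. It assumes that all studies of real
# effects have 100% power, which makes it a worst-case estimate.
def soric_fdr(discovery_rate, alpha=0.05):
    return (1 / discovery_rate - 1) * alpha / (1 - alpha)

print(round(soric_fdr(0.31), 2))  # EDR of 31% -> FDR of about .12 (12%)
print(round(soric_fdr(0.05), 2))  # EDR at the chance level -> FDR of 1.0 (100%)
```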
Figure 2 examines how things changed over the next five years.
The observed discovery rate decreased slightly, but statistically significantly, from 71% to 66%. This shows that researchers reported more non-significant results. The expected discovery rate increased from 31% to 40%, but the overlapping confidence intervals imply that this is not a statistically significant increase at the alpha = .01 level (if two 95% CIs do not overlap, the difference is significant at around alpha = .01). Although smaller, the difference between the ODR of 66% and the EDR of 40% is statistically significant and shows that selection for significance continues. The ERR estimate did not change, indicating that significant results are not being obtained with more power. Overall, these results show only modest improvements, suggesting that most researchers who publish in Frontiers in Psychology continue to conduct research in the same way as they did before, despite ample discussion of the need for methodological reforms such as a priori power analysis and the reporting of non-significant results.
The results for 2020 show that the increase in the EDR was a statistical fluke rather than a trend. The EDR returned to the level of 2010-2015 (29% vs. 31%), but the ODR remained lower than in the beginning, showing slightly more reporting of non-significant results. The size of the file drawer remains large, with an ODR of 66% and an EDR of 29%.
The EDR results for 2021 look better again, but the difference from 2020 is not statistically significant. Moreover, the results for 2022 show a lower EDR that matches the EDR at the beginning.
Overall, these results show that results published in Frontiers in Psychology are selected for significance. While the observed discovery rate is in the upper 60s (in percent), the expected discovery rate is around 35%. Thus, the percentage of significant results is nearly twice as high as the power of the studies to produce them. Most concerning is that a decade of meta-psychological discussion about research practices has not produced any notable changes in the amount of selection bias or in the power of studies to produce replicable results.
How should readers of Frontiers in Psychology articles deal with this evidence that some published results were obtained with low power and inflated effect sizes that will not replicate? One solution is to retrospectively change the significance criterion. Comparisons of the evidence in original studies and replication outcomes suggest that studies with a p-value below .005 tend to replicate at a rate of 80%, whereas studies with just significant p-values (.050 to .005) replicate at a much lower rate (Schimmack, 2022). Demanding stronger evidence also reduces the false positive risk. This is illustrated in the last figure that uses results from all years, given the lack of any time trend.
In this figure, the solid red line has moved to z = 2.8, the value that corresponds to p = .005 (two-sided). Using this more stringent criterion for significance, only 45% of the z-scores are significant. Another 25% were significant with alpha = .05 but are no longer significant with alpha = .005. As power decreases when alpha is set to a more stringent (lower) level, the EDR is also reduced, to only 21%. Thus, there is still selection for significance. However, the more stringent significance filter also selects for studies with higher power, and the ERR remains at 72%, even with alpha = .005 for the replication study. If the replication study used the traditional alpha level of .05, the ERR would be even higher, which explains the finding that the actual replication rate for studies with p < .005 is about 80%.
The lower alpha also reduces the risk of false positive results, even though the EDR is reduced. The FDR is only 2%. Thus, the null-hypothesis is unlikely to be true. The caveat is that the standard null-hypothesis in psychology is the nil-hypothesis, and the population effect size might still be too small to be of practical significance. Thus, readers who interpret results with p-values below .005 should also evaluate the confidence interval around the reported effect size, using a more conservative 99% confidence interval rather than the traditional 95% confidence interval. In many cases, this confidence interval is likely to be wide and to provide insufficient information about the strength of an effect.
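A minimal sketch of these recommendations; the effect size estimate and standard error in the last step are made up for illustration.

```python
from scipy import stats

# The z-score that corresponds to p = .005 (two-sided), i.e., where the
# solid red line is drawn in the figure.
print(round(stats.norm.isf(0.005 / 2), 2))  # ~2.81

# Soric's bound with the stricter criterion and the EDR of 21% reported above.
print(round((1 / 0.21 - 1) * 0.005 / 0.995, 3))  # ~0.019, i.e., about 2%

# A 99% confidence interval for a hypothetical effect size estimate.
d, se = 0.40, 0.15
z99 = stats.norm.isf(0.01 / 2)  # ~2.58
print(round(d - z99 * se, 2), round(d + z99 * se, 2))  # ~0.01 to 0.79
```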
Social psychology has an open secret. For decades, social psychologists conducted experiments with low statistical power (i.e., even when the predicted effect was real, their studies often failed to detect it with p < .05), yet their journals were filled with significant (p < .05) results. To achieve significant results, social psychologists used so-called questionable research practices that most lay people or undergraduate students consider to be unethical. The consequences of these shady practices became apparent in the past decade, when influential results could not be replicated. The famous Reproducibility Project estimated that only 25% of published significant results in social psychology are replicable. Most undergraduate students who learn about this fact are shocked and worry about the credibility of the results in their social psychology textbooks.
Today, there are two types of social psychologists. Some are actively trying to improve the credibility of social psychology by adopting open science practices such as preregistration of hypotheses, sharing open data, and publishing non-significant results rather than hiding these findings. However, other social psychologists are actively trying to deflect criticism. Unfortunately, it can be difficult for lay people, journalists, or undergraduate students to make sense of articles that make seemingly valid arguments but only serve the purpose of protecting the image of social psychology as a science.
As somebody who has followed the replication crisis in social psychology for the past decade, I can provide some helpful information. In this blog post, I want to point out that Duane T. Wegener and Leandre R. Fabrigar have made numerous false arguments against critics of social psychology, and that their latest article, “Evaluating Research in Personality and Social Psychology: Considerations of Statistical Power and Concerns About False Findings,” ignores the replication crisis in social psychology and the core problem of selectively publishing significant results from underpowered studies.
The key point of their article is that “statistical power should be de-emphasized in comparison to current uses in research evaluation” (p. 1105).
To understand why this is a strange recommendation, it is important to understand that power is simply the probability of producing evidence for an effect when an effect exists. When the criterion for evidence is a p-value below .05, power is the probability of obtaining this desired outcome. One advantage of high power is that researchers are likely to get the correct result. In contrast, a study with low power is likely to produce the wrong result, a type-II error: the study tested a correct hypothesis, but the results fail to provide sufficient support for it. As these failures can have many causes (low power or a wrong theory), they are difficult to interpret and to publish. Often these studies remain unpublished, the published record is biased, and resources are wasted. Thus, high power is a researcher’s friend. To make a comparison, if you could gamble on a slot machine with a 20% chance of winning or one with an 80% chance of winning, which machine would you pick? The answer is simple. Everybody would rather win. The problem is only that researchers have to invest more resources in a single study to increase power. They may not have enough money or time to do so. So, they are more like desperate gamblers: you need a publication, you don’t have enough resources for a well-powered study, so you run a low-powered study and hope for the best. Of course, many desperate gamblers lose and are then even more desperate. That is where the analogy ends. Unlike gamblers in a casino, researchers are their own dealers and can use a number of tricks to get the desired outcome (Simmons et al., 2011). Suddenly, a study with only 20% power (the chance of winning honestly) can have a chance of winning of 80% or more.
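To make the slot-machine point concrete, here is a minimal simulation sketch; the effect size and sample sizes are made up for illustration.

```python
import numpy as np
from scipy import stats

# How often does a two-sample t-test reach p < .05 when a real but modest
# effect (d = .4) exists? Power is the long-run "chance of winning."
rng = np.random.default_rng(1)

def power_sim(n_per_group, d=0.4, n_sims=20_000, alpha=0.05):
    wins = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(d, 1.0, n_per_group)
        if stats.ttest_ind(treatment, control).pvalue < alpha:
            wins += 1
    return wins / n_sims

print(power_sim(25))   # roughly 30% power: most spins lose
print(power_sim(100))  # roughly 80% power: most spins win
```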
This brings us to the second advantage of high-powered studies. Power determines the outcome of a close replication study. If a researcher conducted a study with 20% power and found some tricks to get significance, the probability of replicating the result honestly is again only 20%. Many unsuspecting graduate students have wasted precious years trying to build on studies that they were not able to replicate. Unless they quickly learned the dark art of obtaining significant results with low power, they did not have a competitive CV to get a job. Thus, selective publishing of underpowered studies is demoralizing and rewards cheating.
None of this is a concern for Wegener and Fabrigar, who do not cite influential articles about the use of questionable research practices (John et al., 2012) or my own work that uses estimates of observed power to reveal these practices (Schimmack, 2012; see also Francis, 2012). Instead, they suggest that “problems with the overuse of power arise when the pre-study concept of power is used retrospectively to evaluate completed research” (p. 1115). The only problem that arises from estimating the actual power of completed studies, however, is the detection of questionable practices that produce more reported significant results (often 100%) than one would expect given the low power to do so. Of course, for researchers who want to use QRPs to produce inflated evidence for their theories, this is a problem. However, for consumers of research, the detection of questionable results is desirable so that they can ignore this evidence in favor of honestly reported results based on properly powered studies.
The bulk of Wegener and Fabrigar’s article discusses the relationship between power and the probability of false positive results. A false positive result occurs when a statistically significant result is obtained in the absence of a real effect. The standard criterion of statistical significance, p < .05, implies that a researcher who tests 100 false hypotheses (no real effect) is expected to obtain 95 non-significant results and 5 false positive results. This may sound sufficient to keep false positive results at a low level. However, the false positive risk is a conditional probability given a significant result. If a researcher conducts 100 studies, obtains 5 significant results, and interprets these results as real effects, the researcher has a false positive rate of 100% because 5 significant results are expected by chance alone. An honest researcher would conclude from a series of studies with only 5 out of 100 significant results that they found no evidence for a real effect.
Now let’s consider a researcher who conducted 100 studies and obtained 24 significant results. As 24 is a lot more than the 5 studies expected by chance alone, the researcher can conclude that at least some of the 24 significant results are caused by real effects. However, it is also possible that some of these results are false positives. Soric (1989) – not cited by Wegener and Fabrigar – derived a simple formula to estimate the false discovery risk. The formula makes the assumption that studies of real effects have 100% power to detect the effect. As a result, there are zero studies of real effects that fail to provide evidence for them. This assumption makes it possible to estimate the maximum percentage of false positive results.
In this simple example, we have 4 false positive results and 20 studies with evidence for a real effect. Thus, the false positive risk is 4 / 24 = 17%. While 17% is a lot more than 5%, it is still fairly low and does not warrant claims that “most published results are false” (Ioannidis, 2005). Yet, it is also not very reassuring if 17% of published results might be false positives (e.g., if 17% of cancer treatments actually do not work). Moreover, based on a single study, we do not know which of the 24 results are true results and which are false results. With a probability of 17% (about 1/6), trusting a result is like playing Russian roulette. The solution to this problem is to conduct a replication study. In our example, the 20 true effects will produce significant results again because they were obtained with 100% power to do so. However, the chance that one of the 4 false positive results is significant again in a replication study is only 5%, and the chance of obtaining a false positive result twice in a row is 5/100 * 5/100 = 25 / 10,000 = 0.25%. So, with high-powered studies, a single replication study can separate true and false original findings.
Things look different in a world of low-powered studies. Let’s assume that studies have only 25% power to produce a significant result, which is in line with the success rate of replication studies in social psychology (Open Science Collaboration, 2015).
In this scenario (e.g., 80 studies of real effects with 25% power produce 20 significant results, and 20 studies of null effects produce 1 false positive), there is only 1 false positive result and the false positive risk is only 1 out of 21, ~5%. Of course, researchers do not know this and have to wonder whether some of the 21 significant results are false positives. When they conduct replication studies, only about 5 of their 21 significant results (25% of the 20 true positives) are expected to replicate. Thus, a single replication study does not help to distinguish true and false findings. This leads to confusion and the need for additional studies to separate true and false findings, but low power will produce inconsistent results again and again. The consequences can be seen in the actual literature in social psychology. Many literatures are a selected set of inconsistent results that do not advance theory.
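A small simulation makes the contrast between the two worlds concrete. The splits of true and false hypotheses (20% vs. 80% in the first world, 80% vs. 20% in the second) are the splits implied by the numbers above and are assumptions for illustration, not estimates.

```python
import numpy as np

rng = np.random.default_rng(7)

def replication_world(power, prop_true, n_studies=100_000, alpha=0.05):
    true_effect = rng.random(n_studies) < prop_true
    p_sig = np.where(true_effect, power, alpha)   # chance of a significant result
    original_sig = rng.random(n_studies) < p_sig
    replication_sig = rng.random(n_studies) < p_sig
    fdr = np.mean(~true_effect[original_sig])          # false positives among discoveries
    rep_rate = np.mean(replication_sig[original_sig])  # discoveries that replicate
    return round(fdr, 2), round(rep_rate, 2)

print(replication_world(power=1.00, prop_true=0.2))  # ~(0.17, 0.84): replication separates true from false
print(replication_world(power=0.25, prop_true=0.8))  # ~(0.05, 0.24): replication failures are ambiguous
```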
In sum, high powered studies quickly separate true and false findings, whereas low powered studies produce inconsistent results that make it difficult to separate true and false findings (Maxwell, 2004, not cited by Wegener & Fabrigar).
Actions Speak Louder Than Words
Over the past decade, my collaborators and I have developed powerful statistical tools to estimate the power of studies that were actually conducted (Bartos & Schimmack, 2021; Brunner & Schimmack, 2022; Schimmack, 2012). In combination with Soric’s (1989) formula, estimates of actual power can also be used to estimate the real false positive risk. Below, I show some results when this method is applied to social psychology. I focus on the journal Personality and Social Psychology Bulletin (PSPB) for several reasons. First, Wegener and Fabrigar were co-editors of this journal right after concerns about questionable research practices and low power became a highly discussed topic and some journal editors changed policies to increase the replicability of published results (e.g., Steven Lindsay at Psychological Science). Examining the power of studies published in PSPB when Wegener and Fabrigar were editors provides objective evidence about their actions in response to concerns about replication failures in social psychology. Another reason to focus on PSPB is that Wegener and Fabrigar published their defense of low-powered research in this journal, suggesting a favorable attitude towards their position by the current editors. We can therefore examine whether the current editors changed standards or not. Finally, PSPB was edited from 2017 to 2021 by Chris Crandall, who has been a vocal defender of results obtained with questionable research practices on social media.
Let’s start with the years before concerns about replication failures became openly discussed. I focus on the years 2000 to 2012.
Figure 1 shows a z-curve plot of automatically extracted statistical results published in PSPB from 2000 to 2012. All statistical results are converted into z-scores. A z-curve plot is a histogram that shows the distribution of these z-scores. One important piece of information in a z-curve plot is the percentage of significant results. All z-scores greater than 1.96 (the solid vertical red line) are statistically significant with p < .05 (two-sided). Visual inspection shows a lot more significant results than non-significant results. More precisely, the percentage of significant results (i.e., the observed discovery rate, ODR) is 71%.
Visual inspection of the histogram also shows a strange shape of the distribution of z-scores. While the peak of the distribution is at the point of significance, the distribution shows a rather steep drop just below z = 1.96. Moreover, some of these just-short z-scores are still used to claim support for a hypothesis, often as “marginally significant” results. Only z-scores below 1.65 (p < .10 two-sided or p < .05 one-sided; the dotted red line) are usually interpreted as non-significant results. The distribution shows that these results are less likely to be reported. This wonky distribution of z-scores suggests that questionable research practices were used.
Z-curve analysis makes it possible to estimate statistical power based on the distribution of statistically significant results alone. Without going into the details of this validated method, the results suggest that the power of the studies (i.e., the expected discovery rate, EDR) would produce only 23% significant results. Thus, the actual percentage of 71% significant results is inflated by questionable practices. Moreover, the 23% estimate is consistent with the fact that only 25% of unbiased replication studies produced a significant result (Open Science Collaboration, 2015). With a 23% discovery rate, Soric’s formula yields a false positive risk of 18%. That means roughly 1 out of 5 published results could be a false positive result.
In sum, while Wegener and Fabrigar do not mention replication failures and questionable research practices, the present results confirm the explanation of replication failures in social psychology as a consequence of using questionable research practices to inflate the success rate of studies with low power (Schimmack, 2020).
Figure 2 shows the z-curve plot for results published during Wegener and Fabrigar’s reign as editors. The results are easily summarized. There is no significant change. Social psychologists continued to publish ~70% significant results with only about 20% power to do so. Wegener and Fabrigar might argue that there was not enough time to change practices in response to concerns about questionable practices. However, their 2022 article provides an alternative explanation. They do not consider it a problem when researchers conduct underpowered studies. Rather, the problem for them is when researchers like me estimate the actual power of studies and reveal the massive use of questionable practices.
The next figure shows the results for Chris Crandall’s years as editor. While the percentage of significant results remained at 70%, the power to produce these results increased to 32%. However, there is uncertainty about this increase, and the lower limit of the 95% CI is still only 21%. Even if there was an increase, it would not imply that Chris Crandall caused it. A more plausible explanation is that some social psychologists changed their research practices and some of this research was published in PSPB. In other words, Chris Crandall and his editorial team did not discriminate against studies with improved power.
It is too early to evaluate the new editorial team led by Michael D. Robinson, but for the sake of completeness, I am also posting the results for the last two years. The results show a further increase in power to 48%. Even the lower limit of the confidence interval is now 36%. Thus, even articles published in PSPB are becoming more powerful, much to the dismay of Wegener and Fabrigar, who believe that “the recent overemphasis on statistical power should be replaced by a broader approach in which statistical and conceptual forms of validity are considered together” (p. 1114). In contrast, I would argue that even an average power of 48% is ridiculously low. An average power of 48% implies that many studies have even less than 48% power.
Conclusion
More than 50 years ago, the famous psychologists Amos Tversky and Daniel Kahneman (1971) wrote “we refuse to believe that a serious investigator will knowingly accept a .50 risk of failing to confirm a valid research hypothesis” (p. 110). Wegener and Fabrigar prove them wrong. Not only are they willing to conduct such studies, they even propose that doing so is scientific and that demanding more power can have many negative side effects. Similar arguments have been made by other social psychologists (Finkel, Eastwick, & Reis, 2017).
I am siding with Kahneman, who realized too late that he placed too much trust in questionable results produced by social psychologists and compared some of this research to a train wreck (Kahneman, 2017). However, there is no consensus among psychologists, and readers of social psychological research have to make up their own minds. This blog post only points out that social psychology lacks clear scientific standards and has no proper mechanism to ensure that theoretical claims rest on solid empirical foundations. Researchers are still allowed to use questionable research practices to present overly positive results. At this point, the credibility of results depends on researchers’ willingness to embrace open science practices. While many young social psychologists are motivated to do so, Wegener and Fabrigar’s article shows that they are facing resistance from older social psychologists who are trying to defend the status quo of underpowered research.
I am not the first and I will not be the last to point out that the traditional peer-review process is biased. After all, who would take on the thankless job of editing a journal if it did not come with the influence and power to select articles you like and to reject articles you don’t like? Authors can only hope that they find an editor who favors their story during the process of shopping around a paper. This is a long and frustrating process. My friend Rickard Carlsson created a new journal that operates differently, with a transparent review process and virtually no rejections. Check out Meta-Psychology. I published two articles there that reported results based on math and computer simulations. Nobody challenged their validity, but other journals rejected the work for political reasons (AMPPS rejection).
The biggest event in psychology, especially social psychology, in the past decade (2011-2020) was the growing awareness of the damage caused by selective publishing of significant results. It has long been known that psychology journals nearly exclusively publish statistically significant results (Sterling, 1959). This made it impossible to publish studies with non-significant results that could correct false positive results. It was long assumed that this was not a problem because false positive results are rare. What changed over the past decade was that researchers published replication failures that cast doubt on numerous classic findings in social psychology such as unconscious priming or ego-depletion.
Many, if not most, senior social psychologists have responded to the replication crisis in their field with a variety of defense mechanisms, such as repression or denial. Some have responded with intellectualization/rationalization and were able to publish their false arguments dismissing replication failures in peer-reviewed journals (Bargh, Baumeister, Gilbert, Fiedler, Fiske, Nisbett, Stroebe, Strack, Wilson, etc., to name the most prominent ones). In contrast, critics had a harder time making their voices heard. Most of my work on this topic has been published in blog posts, in part because I don’t have the patience and frustration tolerance to deal with reviewer comments. However, this is not the only reason, and in this blog post I want to share what happened when Moritz Heene and I were invited by Christoph Klauer to write an article on this topic for the German journal Psychologische Rundschau.
For readers who do not know Christoph: he is a very smart social psychologist who worked as an assistant professor with Hubert Feger when I was an undergraduate student. I respect his intelligence and his work, such as his research on the Implicit Association Test.
Maybe he invited us to write a commentary because he knew me personally. Maybe he respected what we had to say. In any case, we were invited to write an article and I was motivated to get an easy ‘peer-reviewed’ publication, even if nobody outside of Germany cares about a publication in this journal.
After submitting our manuscript, I received the following response in German. I used http://www.DeepL.com/Translator (free version) to share an English version.
Thu 2016-04-14 3:50 AM
Dear Uli,
Thank you very much for the interesting and readable manuscript. I enjoyed reading it and can agree with most of the points and arguments. I think this whole debate will be good for psychology (and hopefully social psychology as well), even if some are struggling at the moment. In any case, the awareness of the harmfulness of some previously widespread habits and the realization of the importance of replication has, in my impression, increased significantly among very many colleagues in the last two to three years.
Unfortunately, for formal reasons, the manuscript does not fit so well into the planned special issue. As I said, the aim of the special issue is to discuss topics around the replication question in a more fundamental way than is possible in the current discussions and forums, with some distance from the current debates. The article fits very well into the ongoing discussions, with which you and Mr. Heene are explicitly dealing, but it misses the goal of the special issue. I’m sorry if there was a misunderstanding.
That in itself would not be a reason for rejection, but there is also the fact that a number of people and their contributions to the ongoing debates are critically discussed. According to the tradition of the Psychologische Rundschau, each of them would have to be given the opportunity to respond in the issue. Such a discussion, however, would go far beyond the intended scope of the thematic issue. It would also pose great practical difficulties, because of the German language, to realize this with the English-speaking authors (Ledgerwood; Feldman Barrett; Hewstone, however, I think can speak German; Gilbert). For example, you would have to submit the paper in an English version as well, so that these authors would have a chance to read the criticisms of their statements. Their comments would then have to be translated back into German for the readers of Psychologische Rundschau.
All this, I am afraid, is not feasible within the scope of the special issue in terms of the amount of space and time available. Personally, as I said, I find most of your arguments in the manuscript apt and correct. From experience, however, it is to be expected that the persons criticized will have counter-arguments, and the planned special issue cannot and should not provide such a continuation of the ongoing debates in the Psychologische Rundschau. We currently have too many discussion forums in the Psychologische Rundschau, and I do not want to open yet another one.
I ask for your understanding and apologize once again for apparently not having communicated the objective of the special issue clearly enough. I hope you and Mr. Heene will not hold this against me, even though I realize that you will be disappointed with this decision. However, perhaps the manuscript would fit well in one of the Internet discussion forums on these issues or in a similar setting, of which there are several and which are also emerging all the time. For example, I think the Fachgruppe Allgemeine Psychologie is currently in the process of setting up a new discussion forum on the replicability question (although there was also a deadline at the end of March, but perhaps the person responsible, Ms. Bermeitinger from the University of Hildesheim, is still open for contributions).
I am posting this letter now because the forced resignation of Fiedler as editor of Perspectives on Psychological Science made it salient how political publishing in psychology journals is. Many right-wing media outlets commented on this event to support their anti-woke culture wars. They want to maintain the illusion that current science (I focus on psychology here) is free of ideology and only interested in searching for the truth. This is BS. Psychologists are human beings and show in-group bias. When most psychologists in power are old, White men, they will favor old, White men who are like them. Like all systems that work for the people in power, they want to maintain the status quo. Fiedler abused his power to defend the status quo against criticisms of a lack of diversity. He also published several articles defending (social) psychology against accusations of shoddy practices (questionable research practices).
I am also posting it here because a very smart psychologist stated in private that he agreed with many of the critical comments that we made about replication-crisis deniers. As science is a social game, it is understandable that he never commented on this topic in public. (If he doesn’t like that I am making his words public, he can say that he was just being polite and didn’t really mean what he wrote.)
I published a peer-reviewed article on the replication crisis and the shameful response by many social psychologists several years later (Schimmack, 2020). A new generation of social psychologists is trying to correct the mistakes of the previous generation, but as so often, they do so without the support or even against the efforts of the old guard that cannot accept that many of their cherished findings may die with them. But that is life.
One of the bigger stories in Psychological (WannaBe) Science was the forced resignation of Klaus Fiedler from his post as editor-in-chief at the prestigious journal “Perspectives on Psychological Science.” In response to his humiliating eviction, Klaus Fiedler declared “I am the victim.”
In an interview, he claimed that his actions that led to the vote of no confidence by the Board of Directors of the Association for Psychological Science (APS) were “completely fair, respectful, and in line with all journal standards.” In contrast, the Board of Directors listed several violations of editorial policies and standards.
The APS board listed the following complaints about Fiedler’s decisions as editor-in-chief (EIC):
accept an article criticizing the original article based on three reviews that were also critical of the original article and did not reflect a representative range of views on the topic of the original article;
invite the three reviewers who reviewed the critique favorably to themselves submit commentaries on the critique;
accept those commentaries without submitting them to peer review; and,
inform the author of the original article that his invited reply would also not be sent out for peer review. The EIC then sent that reply to be reviewed by the author of the critical article to solicit further comments.
As bystanders, we have to decide whether these accusations by several board members are accurate or whether they are trumped-up charges that misrepresent the facts and Fiedler is an innocent victim. Even without specific knowledge about this incident and the people involved, bystanders are probably forming an impression about Fiedler and his accusers. First, it is a natural human response to avoid embarrassment after a public humiliation. Thus, Fiedler’s claims of no wrongdoing have to be taken with a grain of salt. On the other hand, APS board members could also have motives to distort the facts, although these are less obvious.
To understand the APS board’s response to Fiedler’s actions, it is necessary to take into account that Fiedler’s questionable editorial decisions affected Steven Roberts, an African American scholar, who had published an article about systemic racism in psychology in the same journal under a previous editor (Roberts et al., 2020). Fiedler’s decision to invite three White critical reviewers to submit their criticisms as additional commentaries was perceived by Roberts as racially biased. When he made his concerns public, over 1,000 bystanders agreed and signed an open letter asking for Fiedler’s resignation. In contrast, an opposing open letter received far fewer signatures. While some of the signatories on both sides have their own biases because they know Fiedler as a friend or foe, most of the signatories did not know anything about Fiedler and reacted to Roberts’ description of his treatment. Fiedler never denied that this account was an accurate description of events. He merely claimed that his actions were “completely fair, respectful, and in line with journal standards.” Yet, nobody else has supported Fiedler’s claim that it is entirely fair and acceptable to invite three White-ish reviewers to submit their reviews as commentaries and to accept these commentaries without peer review.
I conducted an informal and unrepresentative poll that confirmed my belief that inviting reviewers to submit a commentary is rare.
What is even more questionable is that all three reviews agreed with Hommel’s critical commentary on Roberts’ target article. It is not clear why reviews of a commentary needed to be published as additional commentaries if these reviews agreed with Hommel’s commentary. The main point of reviews is to determine whether a submission is suitable for publication. If Hommel’s commentary was so deficient that all three reviewers were able to make additional points that were missing from it, his submission should have been rejected, with or without a chance of resubmission. In short, Fiedler’s actions were highly unusual and questionable, even if they were not racially motivated.
Even if Fiedler thought that his actions were fair and unbiased when he was acting, the responses by Roberts, over 1,000 signatories, and the APS board of directors could have made him realize that others viewed his behavior differently and maybe recognize that his actions were not as fair as he assumed. He could even have apologized for his actions, or at least for the harm they caused, however unintentional. Yet, he chose to blame others for his resignation: “I am the victim.” I believe that Fiedler is indeed a victim, but not in the way he perceives the situation. Rather than blaming others for his disgraceful resignation, he should blame himself. To support my argument, I will propose a mediation model and provide a case study of Fiedler’s response to criticism as empirical support.
From Arrogance to Humiliation
A well-known biblical proverb states that arrogance is the cause of humiliation (“Hochmut kommt vor dem Fall”). I am proposing a mediation model of this assumed relationship. Fiedler is very familiar with mediation models (Fiedler, Harris, & Schott, 2018). A mediation model is basically a causal chain. I propose that arrogance may lead to humiliation because it breeds ignorance. Figure 1 shows ignorance as the mediator. That is, arrogance makes it more likely that somebody discounts valid criticism. In turn, such individuals may act in ways that are not adaptive or socially acceptable. This leads either to personal harm or to damage to a person’s reputation. Arrogance and ignorance will also shape the response to social rejection. Rather than making an internal attribution that elicits feelings of embarrassment, an emotion that repairs social relationships, arrogant and ignorant individuals will make an external attribution (blame) that leads to anger, an emotion that further harms social relationships.
Fiedler’s claim that his actions were fair and that he is the victim makes it clear that he made an external attribution. He blames others, but the real problem is that Fiedler is unable to recognize when he is wrong and criticism is justified. This attributional bias is well known in psychology and is called a self-serving attribution. To enhance their self-esteem, some individuals attribute successes to their own abilities and blame others for their failures. I present a case study of Fiedler’s response to the replication crisis as evidence that his arrogance blinds him to valid criticism.
Replicability and Regression to the Mean
In 2011, social psychology was faced with emerging evidence that many findings, including fundamental findings like unconscious priming, cannot be replicated. A major replication project found that only 25% of social psychology studies produced a significant result again in an attempt to replicate the original study. These findings have triggered numerous explanations for the low replication rate in social psychology (OSC, 2015; Schimmack, 2020; Wiggins & Christopherson, 2019).
Explanations for the replication crisis in social psychology can be divided into two camps. One camp believes that replication failures reveal major problems with the studies that social psychologists have conducted for decades. The other camp argues that replication failures are a normal part of science and that published results can be trusted even if they failed to replicate in recent replication studies. A notable difference between these two camps is that defenders of the credibility of social psychology tend to be established and prominent figures in social psychology. As a result, they also tend to be old, male, and White. However, these surface characteristics are only correlated with views about the replication crisis. The main causal factor is likely to be the threat that the replication crisis poses to eminent social psychologists’ reputations and legacies. Rather than becoming famous names alongside Allport, their names may be used to warn future generations about the dark days when social psychologists invented theories based on unreliable results.
Consistent with the stereotype of old, White, male social psychologists, Fiedler has become an outspoken critic of the replication movement and has tried to normalize replication failures. After the credibility of psychology was challenged in news outlets, the board of the German Psychological Society (DGPs) issued a whitewashing statement that tried to reassure the public that psychology is a science. The web page has been deleted, but a copy of the statement is preserved here (Stellungnahme). This official statement triggered outrage among some members, and DGPs created a discussion forum (also deleted by now). Fiedler participated in this discussion with the claim that replication failures can be explained by a statistical phenomenon known as regression to the mean. He repeated this argument in an email to a reporter that was shared by Mickey Inzlicht in the International Social Cognition Network (ISCON) group on Facebook. This post elicited many commentaries that were mostly critical of Fiedler’s attempt to cast doubt on the scientific validity of the replication project. The ISCON post and the comments were deleted (when Mickey left Facebook), but they were preserved in my Google inbox. Here is the post and the most notable comments.
Michael Inzlicht shared Fiedler’s response to the finding of the Reproducibility Project that only 25% of significant results in social psychology could be replicated (i.e., produced a p-value below .05).
August 31 at 9:46am
Klaus Fiedler has granted me permission to share a letter that he wrote to a reporter (Bruce Bowers) in response to the replication project. This letter contains Klaus’s words only, and the only part I edited was to remove his phone number. I thought this would be of interest to the group.
Dear Bruce:
Thanks for your email. You can call me tomorrow but I guess what I have to say is summarized in this email.
Before I try to tell it like it is, I ask you to please attend to my arguments, not just the final evaluations, which may appear unbalanced. So if you want to include my statement in your article, maybe along with my name, I would be happy not to detach my evaluative judgment from the arguments that in my opinion inevitably lead to my critical evaluation.
First of all I want to make it clear that I have been a big fan of properly conducted replication and validation studies for many years – long before the current hype of what one might call a shallow replication research program. Please note also that one of my own studies has been included in the present replication project; the original findings have been borne out more clearly than in the original study. So there is no self-referent motive for me to be overly critical.
However, I have to say that I am more than disappointed by the present report. In my view, such an expensive, time-consuming, and resource-intensive replication study, which can be expected to receive so much attention and to have such a strong impact on the field and on its public image, should live up (at least) to the same standards of scientific scrutiny as the studies that it evaluates. I’m afraid this is not the case, for the following reasons …
The rationale is to plot the effect size of replication results as a function of original results. Such a plot is necessarily subject to regression toward the mean. On a-priori-grounds, to the extent that the reliability of the original results is less than perfect, it can be expected that replication studies regress toward weaker effect sizes. This is very common knowledge. In a scholarly article one would try to compare the obtained effects to what can be expected from regression alone. The rule is simple and straightforward. Multiply the effect size of the original study (as a deviation score) with the reliability of the original test, and you get the expected replication results (in deviation scores) – as expected from regression alone. The informative question is to what extent the obtained results are weaker than the to-be-expected regressive results.
To be sure, the article’s muteness regarding regression is related to the fact that the reliability was not assessed. This is a huge source of weakness. It has been shown (in a nice recent article by Stanley & Spence, 2014, in PPS) that measurement error and sampling error alone will greatly reduce the replicability of empirical results, even when the hypothesis is completely correct. In order not to be fooled by statistical data, it is therefore of utmost importance to control for measurement error and sampling error. This is the lesson we took from Frank Schmidt (2010). It is also very common wisdom.
The failure to assess the reliability of the dependent measures greatly reduces the interpretation of the results. Some studies may use single measures to assess an effect whereas others may use multiple measures and thereby enhance the reliability, according to a principle well-known since Spearman & Brown. Thus, some of the replication failures may simply reflect the naïve reliance on single-item dependent measures. This is of course a weakness of the original studies, but a weakness different from non-replicability of the theoretically important effect. Indeed, contrary to the notion that researchers perfectly exploit their degrees of freedom and always come up with results that overestimate their true effect size, they often make naïve mistakes.
By the way, this failure to control for reliability might explain the apparent replication advantage of cognitive over social psychology. Social psychologists may simply often rely on a single measure, whereas cognitive psychologists use multi-trial designs resulting in much higher reliability.
The failure to consider reliability refers to the dependent measure. A similar failure to systematically include manipulation checks renders the independent variables equivocal. The so-called Duhem-Quine problem refers to the unwarranted assumption that some experimental manipulation can be equated with the theoretical variable. An independent variable can be operationalized in multiple ways. A manipulation that worked a few years ago need not work now, simply because no manipulation provides a plain manipulation of the theoretical variable proper. It is therefore essential to include a manipulation check, to make sure that the very premise of a study is met, namely a successful manipulation of the theoretical variable. Simply running the same operational procedure as years before is not sufficient, logically.
Last but not least, the sampling rule that underlies the selection of the 100 studies strikes me as hard to tolerate. Replication teams could select their studies from the first 20 articles published in a journal in a year (if I correctly understand this sentence). What might have motivated the replication teams’ choices? Could this procedure be sensitive to their attitude towards particular authors or their research? Could they have selected simply studies with a single dependent measure (implying low reliability)? – I do not want to be too suspicious here but, given the costs of the replication project and the human resources, does this sampling procedure represent the kind of high-quality science the whole project is striving for?
Across all replication studies, power is presupposed to be a pure function of the size of participant samples. The notion of a truly representative design in which tasks and stimuli and context conditions and a number of other boundary conditions are taken into account is not even mentioned (cf. Westfall & Judd).
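Before turning to the comments, it may help to make the regression rule in Fiedler’s letter concrete. The following is only a minimal sketch of the textbook shrinkage relation he appeals to (expected retest deviation = reliability x test deviation); the reliability value, sample size, and variable names are assumptions of mine, not taken from his letter.

```python
# Minimal sketch of the shrinkage rule: with reliability rel, the expected
# retest deviation equals rel times the test deviation (assumed numbers).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
rel = 0.6                                             # assumed reliability of the first measurement
true = rng.normal(0.0, np.sqrt(rel), n)               # shared true-score component
test = true + rng.normal(0.0, np.sqrt(1 - rel), n)    # first measurement (variance 1)
retest = true + rng.normal(0.0, np.sqrt(1 - rel), n)  # independent second measurement

slope = np.polyfit(test, retest, 1)[0]                # regression of retest on test
print(round(slope, 2))                                # ~0.6, i.e. the assumed reliability
```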
Comments
Brent W. Roberts, 10:02am Sep 4 This comment just killed me “What might have motivated the replication teams’ choices? Could this procedure be sensitive to Their attitude towards Particular authors or Their research?” Once again, we have an eminent, high powered scientist impugning the integrity of, in this case, close to 300, mostly young researchers. What a great example to set.
Daniel Lakens, 12:32pm Sep 4 I think the regression to the mean comment just means: if you start from an extreme initial observation, there will be regression to the mean. He will agree there is publication bias – but just argues the reduction in effect sizes is nothing unexpected – we all agree with that, I think. I find his other points less convincing – there is data about researchers’ expectancies about whether a study would replicate. Don’t blabla, look at data. The problem with moderators is not big – original researchers OK’d the studies – if they cannot think of moderators, we cannot be blamed for not including other checks. Finally, it looks like our power was good, if you examine the p-curve. Not in line with the idea we messed up. I wonder why, with all commentaries I’ve seen, no one takes the effort to pre-register their criticisms, and then just look at the studies and data, and let us know how much it really matters?
Felix Cheung, 2:11pm Sep 4 I don’t understand why regression to the mean cannot be understood in a more positive light when the “mean” in regression to the mean refers to the effect sizes of interest. If that’s the case, then regressing to the mean would mean that we are providing more accurate estimates of the effect sizes.
Joachim Vandekerckhove, 2:15pm Aug 31 The dismissive “regression to the mean” argument either simply takes publication bias as given or assumes that all effect sizes are truly zero. Either of those assumptions make for an interesting message to broadcast, I feel.
Michael Inzlicht, 2:54pm Aug 31 I think we all agree with this, Jeff, but as Simine suggested, if the study in question is a product of all the multifarious biases we’ve discussed and cannot be replicated (in an honest attempt), what basis do we have to change our beliefs at all? To me the RP–plus lots of other stuff that has come to light in the past few years–make me doubt the evidentiary basis of many findings, and by extension, many theories/models. Theories are based on data…and it turns out that data might not be as solid as we thought.
Jeff Sherman, 2:58pm Aug 31 Michael, I don’t disagree. I think RP–plus was an important endeavor. I am sympathetic to Klaus’s lament that the operationalizations of the constructs weren’t directly validated in the replications.
Uli Schimmack, 11:15am Sep 1 This is another example that many psychologists are still trying to maintain the illusion that psychology doesn’t have a replicability problem. A recurrent argument is that human behavior is complex and influenced by many factors that will produce variation in results across seemingly similar studies. Even if this were true, it would not explain why all original studies find significant effects. If moderators can make effects appear or disappear, there would be an equal number of non-significant results in original and replication studies. If psychologists were really serious about moderating factors, non-significant results would be highly important to understand under what conditions an effect does not occur. The publication of only significant results in psychology (documented since Sterling, 1959) shows that psychologists are not really serious about moderating factors and that moderators are only invoked post hoc to explain away failed replications of significant results. Just like Klaus Fiedler’s illusory regression to the mean, these arguments are hollow and only reveal the motivated biases of their proponents to deny a fundamental problem in the way psychologists collect, analyze, and report their research findings. If a 25% replication rate for social psychology is not enough to declare a crisis, then psychology is really in a crisis, and psychologists provide the best evidence for the validity of Freud’s theory of repression. Has Daniel Kahneman commented on the reproducibility-project results?
Garriy Shteynberg, 10:33pm Sep 7 Again, I agree that there is publication bias and its importance even in a world where all H0 are false (as you show in your last comment). Now, do you see that in that very world, regression to the mean will still occur? Also, in the spirit of the dialogue, try to refrain from claiming what others do not know. I am sure you realize that making such truth claims on very little data is at best severely underpowered.
Uli Schimmack, 10:38pm Sep 7 Garriy Shteynberg Sorry, but I always said that regression to the mean occurs when there is selection bias, but without selection bias it will not occur. That is really the issue here and I am not sure what point you are trying to make. We agree that studies were selected and that low replication rate is a result of this selection and regression to the mean. If you have any other point to make, you have to make it clearer.
Malte Elson, 3:38am Sep 8 Garriy Shteynberg would you maybe try me instead? I followed your example of the perfect discipline with great predictions and without publication bias. What I haven’t figured out is what would cause regression to the mean to only occur in one direction (decreased effect size at replication level). The predictions are equally great at both levels since they are exactly the same. Why would antecedent effect sizes in publications be systematically larger if there was no selection at that level?
Marc Halusic, 12:53pm Sep 1 Even if untold moderators affect the replicability of a study that describes a real effect, it would follow that any researcher who cannot specify the conditions under which an effect will replicate does not understand that effect well enough to interpret it in the discussion section.
Maxim Milyavsky, 11:16am Sep 3 I am not sure whether Klaus meant that regression to the mean by itself can explain the failure of replication or regression to the mean given a selection bias. I think that without selection bias regression to the mean cannot count as an alternative explanation. If it could, every subsequent experiment would yield a smaller effect than the previous one, which sounds absurd. I assume that Klaus knows that. So, probably he admits that there was a selection bias. Maybe he just wanted to say – it’s nobody’s fault. Nobody played with data, people were just publishing effects that “worked”. Yet, what sounds puzzling to me is that he does not see any problem in this process.
– Mickey shared some of the responses with Klaus and posted Klaus’s replies to the comments. Several commentators tried to defend Klaus by stating that he would agree with the claim that selection for significance is necessary to see an overall decrease in effect sizes. However, Klaus Fiedler doubled down on the claim that this is not necessary, even though the implication would be that effect sizes shrink every time a study is replicated, which is “absurd” (Maxim Milyavsky), although even this absurd claim has been made (Schooler, 2011).
Michael Inzlicht, September 2 at 1:08pm
More from Klaus Fiedler. He has asked me to post a response to a sample of the replies I sent him. Again, this is unedited, directly copied and pasted from a note Klaus sent me. (Also not sure if I should post it here or in the other, much longer, conversation.)
Having read the echo to my earlier comment on the Nosek report, I got the feeling that I should add some more clarifying remarks.
(1) With respect to my complaints about the complete failure to take regressiveness into account, some folks seem to suggest that this problem can be handled simply by increasing the power of the replication study and that power is a sole function of N, the number of participants. Both beliefs are mistaken. Statistical power is not just a function of N, but also depends on treating stimuli as a random factor (cf. recent papers by Westfall & Judd). Power is 1 – β, the probability that a theoretical hypothesis, which is true, will actually be borne out in a study. This probability not only depends on N. It also depends on the appropriateness of selected stimuli, task parameters, instructions, boundary conditions etc. Even with 1000 participants per cell, measurement and sampling error can be high, for instance, when a test includes weakly selected items, or not enough items. It is a cardinal mistake to reduce power to N.
(2) The only necessary and sufficient condition for regression (to the mean or toward less pronounced values) is a correlation less than zero. This was nicely explained and proven by Furby (1973). We all “learned” that lesson in the first semester, but regression remains a counter-intuitive thing. When you plot effect sizes in the replication studies as a function of effect sizes in the original studies and the correlation between corresponding pairs is < 1, then there will be regression. The replication findings will be weaker than the original ones. One can refrain from assuming that the original findings have been over-estimations. One might represent the data the other way around, plotting the original results as a function of given effects in the replication studies, and one will also see regression. (Note in this connection that Etz’ Bayesian analysis of the replication project also identified quite a few replications that were “too strong”). For a nice illustration of this puzzling phenomenon, you may also want to read the Erev, Wallsten & Budescu (1994) paper, which shows both overconfidence and underconfidence in the same data array.
(3) I’m not saying that regression is easy to understand intuitively (Galton took many years to solve the puzzle). The very fact that people are easily fooled by regression is the reason why controlling for expected regression effects is standard in the kind of research published here. It is almost a prototypical example of what Don Campbell (1996) had in mind when he tried to warn the community from drawing erroneous inferences.
(4) I hope it is needless to repeat that controlling for the reliability of the original studies is essential, because variation in reliability affects the degree of regressiveness. It is particularly important to avoid premature interpretations of seemingly different replication results (e.g., for cognitive and social psychology) that could reflect nothing but unequal reliability.
(5) My critical remark that the replication studies did not include manipulation checks was also met with some spontaneous defensive reactions. Please note that the goal to run so-called “exact” replications (I refrain from discussing this notion here) does not prevent replication researchers from including additional groups supposed to estimate the effectiveness of a manipulation under the current conditions. (Needless to add that a manipulation check must be more than a compliant repetition of the instruction).
(6) Most importantly perhaps, I would like to reinforce my sincere opinion that methodological and ethical norms have to be applied to such an expensive, pretentious and potentially very consequential project even more carefully and strictly than they are applied to ordinary studies. Hardly any one of the 100 target studies could have a similarly strong impact, and call for a similar degree of responsibility, as the present replication project.
Kind regards, Klaus
This response elicited an even more heated discussion. Unfortunately, only some of these comments were mailed to my inbox. I must have made a very negative comment about Klaus Fiedler that elicited a response by Jeff Sherman, the moderator of the group. Eventually, I was banned from the group and created the Psychological Methods Discussion Group, which became the main group for critical discussions of psychological science.
Uli Schimmack, 2:36pm Sep 2 Jeff Sherman The comparison extends to the official (German) statement regarding the results of the OSF-replication project. It does not mention that publication bias is at least a factor that contributed to the outcome, nor does it mention any initiatives to improve the way psychologists conduct their research. It would be ironic if a social psychologist objected to a comparison that is based on general principles of social behavior. I think I don’t have to mention that the United States of America pride themselves on freedom of expression that even allows Nazis to publish their propaganda, which German law does not allow. In contrast, censorship was used by socialist Germany to stay in power. So, please feel free to censor my post and send me into Psychological Methods exile.
Jeff Sherman, 2:49pm Sep 2 Uli Schimmack I am not censoring the ideas you wish to express. I am saying that opinions expressed on this page must be expressed respectfully. Calling this a freedom of speech issue is a red herring. Ironic, too, given that one impact of trolling and bullying is to cause others to self-censor. I am working on a policy statement. If you find the burden unbearable, you can choose to not participate.
Uli Schimmack, 2:53pm Sep 2 Jeff Sherman Klaus is not even part of this. So, how am I bullying him? Plus, I don’t think Klaus is easily intimidated by my comment. And, as a social psychologist how do you explain that Klaus doubled down when every comment pointed out that he ignores the fact that regression to the mean can only produce a decrease in the average if the original sample was selected to be above the mean?
This discussion led to a letter to the DGPs board by Moritz Heene that expressed outrage about the whitewashing of the replication results in their official statement.
From: Moritz Heene To: Andrea Abele-Brehm, Mario Gollwitzer, & Fritz Strack Subject: DGPS-Stellungnahme zu Replikationsprojekt Date: Wed, 02 Sep 2015
[The German correspondence below is translated into English.]
Dear members of the DGPs board,

First of all, thank you for your efforts to make the results of the OSF replication project clearer to the public. In view of this statement by the DGPs, however, I would like to personally express my disagreement, because as a member of the DGPs I do not see a balanced view expressed in this statement in any way; on the contrary, I find it very one-sided. To put it mildly, I regard this statement as a euphemistic treatment of the replication problem in psychology; I am disappointed by it and had expected more. My points of criticism of your statement:

1. On the argument that 68% of the studies were replicated: the test in question checks whether the replicated effect lies within the confidence interval around the original effect, that is, whether the two are significantly different from each other, according to the authors’ logic. Let us generously set aside that this is not a test of the difference between effect sizes, because the confidence interval is placed around the original observed effect, not around the difference. More important is that this is a poor measure of replicability, because the original effects are upward biased (as can also be seen in the original paper), and let us not forget publication bias (see the density distribution of p-values in the original paper). Assuming that the original effect sizes are the population effect sizes is truly a heroic assumption, especially in light of the positive bias of the original effects. Incidentally, in an open letter by Klaus Fiedler that was published on Facebook, it is argued that regression to the mean produced the on-average smaller effect sizes in the OSF project and can explain this effect. This argument may be partly correct, but it implies that the original effects were extreme (i.e., biased, because they were selectively published), since that is precisely the defining characteristic of this regression effect: results that were extreme in a first measurement “tend” toward the mean in a second measurement. The fact that the original effects show a clear positive bias is ignored in your statement, or rather not even mentioned.

Incidentally, the 68% replicability argument is also quite openly criticized in a similar way by the lead author in response to your statement:
@sTeamTraen @JPdeRuiter @ShuhBillSkee 68% is the most optimistic interpretation possible. W/low power and positive result bias, implausible.
— Brian Nosek (@BrianNosek), September 2, 2015
In short: picking out precisely this statistic from the OSF study as support for telling the public that everything in psychology is basically fine strikes me as “cherry picking” of results.

2. The moderator argument is ultimately untenable, because, first, this was tested intensively, in particular in OSF Project 3. The result is summarized, among other places, here:

See, among others: In Many Labs 1 and Many Labs 3 (which I reviewed here), different labs followed standardized replication protocols for a series of experiments. In principle, different experimenters, different lab settings, and different subject populations could have led to differences between lab sites. But in analyses of heterogeneity across sites, that was not the result. In ML1, some of the very large and obvious effects (like anchoring) varied a bit in just how large they were (from “kinda big” to “holy shit”). Across both projects, more modest effects were quite consistent. Nowhere was there evidence that interesting effects wink in and out of detectability for substantive reasons linked to sample or setting. A longer summary can be found here:

The authors put the interpretation so well that I’ll quote them at length here [emphasis added]: A common explanation for the challenges of replicating results across samples and settings is that there are many seen and unseen moderators that qualify the detectability of effects (Cesario, 2014). As such, when differences are observed across study administrations, it is easy to default to the assumption that it must be due to features differing between the samples and settings. Besides time of semester, we tested whether the site of data collection, and the order of administration during the study session moderated the effects. None of these had a substantial impact on any of the investigated effects. This observation is consistent with the first “Many Labs” study (Klein et al., 2014) and is the focus of the second (Klein et al., 2015). The present study provides further evidence against sample and setting differences being a default explanation for variation in replicability. That is not to deny that such variation occurs, just that direct evidence for a given effect is needed to demonstrate that it is a viable explanation.

Second, you write in your statement: “Such findings rather show that psychological processes are often context-dependent and that their generalizability needs to be investigated further. The replication of an American study may produce different results when it is conducted in Germany or in Italy (or vice versa). Similarly, different characteristics of the sample (gender composition, age, level of education, etc.) can affect the result. This context dependence is not a sign of a lack of replicability, but rather a sign of the complexity of psychological phenomena and processes.” No, that is precisely what these new findings do not show, because this is a (post hoc) interpretation that is not supported by the moderators assessed in the new OSF project, since these moderator analyses were not even conducted. Moreover, the postulated context dependence was not found in OSF Project #3. What was found as a source of variation between labs was simply sampling variation, exactly as one has to expect on statistical grounds. So I see no empirical basis at all for your claim, although such a basis should surely exist in a science that calls itself empirical. What I clearly miss as a concluding statement in your press release is that psychology (and social psychology in particular) should no longer accept selectively published and underpowered studies in the future. That would have come somewhat closer to the core of the problem.

Kind regards, Moritz Heene
Moritz Heene received the following response from one of the DGPs board members.
From: Mario Gollwitzer To: Moritz Heene Subject: Re: DGPS-Stellungnahme zu Replikationsprojekt Date: Thu, 03 Sep 2015 10:19:28 +0200
Dear Moritz,

Thank you very much for your email; it is one of many responses that we have received to our press release from Monday, and we think it is very good that it has apparently sparked a discussion among the DGPs membership. We believe that this discussion should be conducted openly; we have therefore decided to set up a kind of discussion forum on our DGPs homepage for our press release (and the Science study and the whole replication project). We are currently working on building the page. I would welcome it if you took part there as well, including with your critical view of our press release.

I can well understand your arguments, and I agree with you that the figure of “68%” does not reflect a “replication rate.” That was a misleading statement.

But apart from that, our goal with this press release was to add something constructive to, or to counter, the negative, partly gleeful and destructive reactions of many media outlets to the Science study. We by no means wanted to sugarcoat the results of the study or to spread a message along the lines of “all is well, business as usual”! Rather, we wanted to argue that replication attempts like these offer an opportunity to gain knowledge, and that this opportunity should be used. That is the constructive message that we would also like to see represented a bit more strongly in the media.

Unlike you, however, I am convinced that it is entirely possible that the differences between an original study and its replications come about through an (unknown) set of (partly known, partly unknown) moderator variables (and their interactions). “Sampling variation,” too, is nothing other than a collective term for such moderator effects. Some of these effects are central to gaining knowledge about a psychological phenomenon, others are not. The task is to describe and explain the central effects better. In this I also see a value of replications, especially of conceptual replications.

Apart from that, however, I fully agree with you that one cannot rule out that some of the non-replicable but published effects (not just in social psychology, by the way, but in all disciplines) are false positives, for which there are a number of reasons (selective publishing, questionable analysis practices, etc.) that are highly problematic. These things are being intensely debated elsewhere. In our press release, however, we wanted to set this discussion aside for the time being and instead focus specifically on the new Science study.

Thanks again for your email. Reactions like these are an important mirror of our work.

Best regards, Mario
After the DGPs created a discussion forum, Klaus Fiedler, Moritz Heene and I shared our exchange of views openly on this site. The website is no longer available, but Moritz Heene saved a copy. He also shared our contribution on The Winnower.
Our contribution responded to Fiedler’s two main claims:
1) that the notably lower average effect sizes in the OSF-project are a statistical artifact of regression to the mean, and
2) that low reliability contributed to the lower effect sizes in the replication studies.
Response to 1): As noted in Heene’s previous post, Fiedler’s regression to the mean argument (results that were extreme in a first assessment tend to be closer to the mean in a second assessment) implicitly assumes that the original effects were biased; that is, they are extreme estimates of population effect sizes because they were selected for publication. However, Fiedler does not mention the selection of original effects, which leads to a false interpretation of the OSF-results in Fiedler’s commentary:
“(2) The only necessary and sufficient condition for regression (to the mean or toward less pronounced values) is a correlation less than zero. … One can refrain from assuming that the original findings have been over-estimations.” (Fiedler)
It is NOT possible to avoid the assumption that original results are inflated estimates because selective publication of results is necessary to account for the notable reduction in observed effect sizes.
a) Fiedler is mistaken when he cites Furby (1973) as evidence that regression to the mean can occur without selection. “The only necessary and sufficient condition for regression (to the mean or toward less pronounced values) is a correlation less than zero. This was nicely explained and proven by Furby (1973)” (Fiedler). It is noteworthy that Furby (1973) explicitly mentions a selection above or below the population mean in his example, when Furby (1973) writes: “Now let us choose a certain aggression level at Time 1 (any level other than the mean)”.
The math behind regression to the mean further illustrates this point. The expected amount of regression to the mean is defined as (1 – r)(mu – M), where r = the correlation between the first and second measurement, mu = the population mean, and M = the mean of the selected group (sample at time 1). For example, if r = .80 (thus, less than 1, as assumed by Fiedler) and the observed mean in the selected group (M) equals the population mean (mu) (e.g., M = .40, mu = .40, and M – mu = .40 – .40 = 0), no regression to the mean will occur because (1 – .80)(.40 – .40) = .20 * 0 = 0. Consequently, a correlation less than 1 is not a necessary and sufficient condition for regression to the mean. The effect occurs only if the correlation is less than 1 and the sample mean differs from the population mean. [Strictly speaking, the mean of a selected group can decrease even if the correlation is 1, but in that case individual scores maintain their position relative to other scores.]
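A couple of lines of code are enough to check this arithmetic. This is just a sketch of the formula above; the function name and the numbers are mine.

```python
# Check of the regression-to-the-mean formula: expected shift = (1 - r) * (mu - M).
def expected_regression(r, mu, M):
    """Expected change of the selected group's mean from time 1 to time 2."""
    return (1 - r) * (mu - M)

print(expected_regression(r=0.80, mu=0.40, M=0.40))  # 0.0: no selection, no expected regression
print(expected_regression(r=0.80, mu=0.20, M=0.60))  # -0.08: a group selected above mu is expected to shrink
```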
b) The regression to the mean effect can be positive or negative. If M < mu and r < 1, the second observations would be higher than the first observations, and the trend towards the mean would be positive. On the other hand, if M > mu and r < 1, the regression effect is negative. In the OSF-project, the regression effect was negative, because the average effect size in the replication studies was lower than the average effect size in the original studies. This implies that the observed effects in the original studies overestimated the population effect size (M > mu), which is consistent with publication bias (and possibly p-hacking).
Thus, the lower effect sizes in the replication studies can be explained as a result of publication bias and regression to the mean. The OSF-results make it possible to estimate how much publication bias inflates observed effect sizes in original studies. We calculated that for social psychology the average effect size fell from Cohen’s d = .6 in the original studies to d = .2 in the replication studies; that is, the published effect sizes were inflated by 200%. It is therefore not surprising that the replication studies produced so few significant results, because the increase in sample sizes did not compensate for the large decrease in effect sizes.
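A small simulation shows how selection for significance alone produces exactly this pattern. It is only a sketch: the true effect size, sample sizes, and number of studies are assumptions chosen for illustration, with a single homogeneous population effect; it is not a reanalysis of the OSF data.

```python
# Sketch: publish only the significant results of low-powered studies, then replicate them.
# Published effect sizes are inflated; unselected replications are not (assumed numbers).
import numpy as np

rng = np.random.default_rng(2)
true_d, n_orig, n_rep, k = 0.2, 20, 80, 20_000        # assumed true effect, per-group n, no. of studies

def observed_d(n):
    treat = rng.normal(true_d, 1, (k, n)).mean(axis=1)
    ctrl = rng.normal(0.0, 1, (k, n)).mean(axis=1)
    return treat - ctrl                               # observed d (within-group sd is 1 by construction)

d_orig = observed_d(n_orig)
se_orig = np.sqrt(2 / n_orig)
published = d_orig / se_orig > 1.96                   # keep only originals significant in the predicted direction
d_rep = observed_d(n_rep)                             # independent replications of the same studies

print(round(d_orig[published].mean(), 2))             # ~0.75: the published originals are inflated
print(round(d_rep[published].mean(), 2))              # ~0.20: the replications regress to the true effect
```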
Regarding Fiedler’s second point (2):
In a regression analysis, the observed regression coefficient (b) for an observed measure with measurement error is a function of the true relationship (bT) and an inverse function of the amount of measurement error (1 – error = reliability; Rel(X)):

b = bT * Rel(X)

(Interested readers can obtain the mathematical proof from Dr. Heene.)
The formula implies that an observed regression coefficient (and other observed effect sizes) is always smaller than the true coefficient that could have been obtained with a perfectly reliable measure, when the reliability of the measure is less than 1. As noted by Dr. Fiedler, unreliability of measures will reduce the statistical power to obtain a statistically significant result. This statistical argument cannot explain the reduction in effect sizes in the replication studies because unreliability has the same influence on the outcome in the original studies and the replication studies. In short, the unreliability argument does not provide a valid explanation for the low success rate in the OSF-replication project.
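The point can also be illustrated with a short simulation: for a standardized mean difference, unreliability attenuates the observed effect by the square root of the reliability, and it does so identically in original and replication studies. The reliability, effect size, and sample sizes below are arbitrary assumptions.

```python
# Sketch: an unreliable dependent measure shrinks the observed effect size equally
# in "original" and "replication" studies, so it cannot create a gap between them.
import numpy as np

rng = np.random.default_rng(3)
true_d, rel, n, k = 0.5, 0.6, 200, 5_000              # assumed values

def observed_d():
    latent_t = rng.normal(true_d, 1, (k, n))          # latent scores, treatment groups
    latent_c = rng.normal(0.0, 1, (k, n))             # latent scores, control groups
    noise = lambda: rng.normal(0.0, 1, (k, n))        # measurement error
    obs_t = np.sqrt(rel) * latent_t + np.sqrt(1 - rel) * noise()
    obs_c = np.sqrt(rel) * latent_c + np.sqrt(1 - rel) * noise()
    return obs_t.mean(axis=1) - obs_c.mean(axis=1)    # observed sd is 1 by construction

print(round(observed_d().mean(), 2))                  # "original" studies: ~0.39 = sqrt(.6) * .5
print(round(observed_d().mean(), 2))                  # "replication" studies: ~0.39 as well
```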
REFERENCES
Furby, L. (1973). Interpreting regression toward the mean in developmental research. Developmental Psychology, 8(2), 172-179. doi:10.1037/h0034145
On September 5, Klaus Fiedler emailed me to start a personal discussion over email.
From: klaus.fiedler [klaus.fiedler@psychologie.uni-heidelberg.de] Sent: September-05-15 7:17 AM To: Uli Schimmack; kf@psychologie.uni-heidelberg.de Subject: iscon gossip
Dear Uli … auf Deutsch … lieber Uli,
You may know that I am not registered on Facebook, but I occasionally get notes from the chat sent to me by others. You are the only one I am writing to briefly. You had written that my comments were wrong and that I therefore no longer deserve any respect.

You are a methodologically motivated and skilled colleague, and I would therefore be very grateful if you could tell me in what way my points do not hold. What is wrong:

— that the regression trap exists?
— that a state-of-the-art study of the form retest = f(test) must control for regression?
— that regression is a function of reliability?
— that a large participant N alone by no means fixes this problem?
— that a missing manipulation check undermines the central premise that the independent variable was established at all?
— that a lack of control for measurement + sampling error undermines the interpretation of the results?

Or is the point that scientific scrutiny no longer counts when “young people” are fighting for a “good cause”?

Sorry, that last question drifts a bit into polemics. It was not meant that way. I really want to know why I am wrong; then I would also gladly correct it. After all, I did not claim to have empirical data that illuminate the comparison of cognitive and social psychology (although it is true that this comparison can only be made if reliability and the effectiveness of the manipulations are controlled). What motivates me is simply the goal that meta-science, too (and meta-science in particular), be held to the same strict standards as the research it evaluates (and often carelessly damages).

As far as social psychology is concerned, you have surely noticed that I am one of its critics as well … Maybe we can talk about that sometime …

Best wishes from Heidelberg, Klaus
I responded to this email and asked him directly to comment on selection bias as a reasonable explanation for the low replicability of social psychology results.
Dear Klaus Fiedler,
Moritz Heene and I have written a response to your comments posted on the DGPS website, which is waiting for moderation. I cc Moritz so that he can send you the response (in German), but I will try to answer your question myself.
First, I don’t think it was good that Mickey posted your comments. I think it would have been better to communicate directly with you and have a chance to discuss these issues in an exchange of arguments. It is also unfortunate that I mixed my response to the official DGPSs statement with your comments. I see some similarities, but you expressed a personal opinion and did not use the authority of an official position to speak for all psychologists when many psychologists disagree with the statement, which led to the post-hoc creation of a discussion forum to find out about members’ opinions on this issue.
Now let me answer your question. First, I would like to clarify that we are trying to answer the same question. To me the most important question is why the reproducibility of published results in psychology journals is so low (it is only 8% for social psychology; see my post https://replicationindex.wordpress.com/2015/08/26/predictions-about-replication-success-in-osf-reproducibility-project/).
One answer to this question is publication bias. This argument has been made since Sterling (1959). Cohen (1962) estimated the replication rate at 60% based on his analysis of typical effect sizes and sample sizes in the Journal of Abnormal and Social Psychology (now JPSP). The 60% estimate was replicated by Sedlmeier and Gigerenzer (1989). So, with this figure in mind we could have expected that 60 out of 100 randomly selected results in JPSP would replicate. However, the actual success rate for JPSP is much lower. How can we explain this?
For the past five years I have been working on a better method to estimate post-hoc power, starting with my Schimmack (2012) Psychological Methods paper, followed by publications on my R-Index website. Similar work has been conducted by Simonsohn (p-curve) and by Wicherts and colleagues (p-uniform). The problem with the 60% estimate is that it uses reported effect sizes, which are inflated. After correcting for this inflation, the estimated power for the social psychology studies in the OSF-project is only 35%. This still does not explain why only 8% were replicated, and I think it is an interesting question how much moderators or mistakes in the replication studies explain this discrepancy. However, a replication rate as low as 35% is entirely predicted based on the published results after taking power and publication bias into account.
In sum, it is well established and known that selection of significant results distorts the evidence in the published literature and that this creates a discrepancy between the published success rate (95%) and the replication rate (let’s say less than 50% to be conservative). I would be surprised if you disagreed with my argument that (a) publication bias is present and (b) publication bias at least partially contributes to the low rate of successful replications in the OSF-project.
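As an aside, a back-of-the-envelope power calculation makes the same point. The sketch below uses a normal approximation and made-up numbers: a replication that is powered for the published (inflated) effect size is badly underpowered if the true effect is much smaller.

```python
# Sketch (normal approximation, assumed numbers): power of a two-group comparison.
import numpy as np
from scipy import stats

def power_two_sample(d, n_per_group, alpha=0.05):
    se = np.sqrt(2 / n_per_group)                     # standard error of the mean difference (sd = 1)
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return 1 - stats.norm.cdf(z_crit - d / se)

n = 64                                                # gives ~80% power if the published d = .5 were real
print(round(power_two_sample(0.5, n), 2))             # ~0.80: planned power based on the published effect
print(round(power_two_sample(0.2, n), 2))             # ~0.20: actual power if the true effect is d = .2
```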
A few days later, I sent a reminder email.
Dear Klaus Fiedler,
I hope you received my email from Saturday in reply to your email “iscon gossip”. It would be nice if you could confirm that you received it and let me know whether you are planning to respond to it.
Best regards, Uli Schimmack
Klaus Fiedler responds without answering my question about the fact that regression to the mean can only explain a decrease in the mean effect sizes if the original values were inflated by selection for significance.
Hi:
As soon as my time permits, I will have a look. Just a general remark in response to your email: I do not understand what argument applies to my critical evaluation of the Nosek report. What you are telling me in the email does not apply to my critique.
Or do you contest that
a state-of the art study of retest = f(original test) has to tackle the regression beast
reliability of the dependent measure has to be controlled
manipulation check is crucial to assess the effective variation of the independent variable
the sampling of studies was suboptimal
If you disagree, I wonder if there is any common ground in scientific methodology.
I am not sure if I want to contribute to Facebook debates … As you can see, the distance from a scientific argument to personal attacks is so short that I do not believe in the value of such a forum.
Kind regards, Klaus
P.S. If I have a chance to read what you have posted, I may send a reply to the DGPs. By the way, I just sent my comments to Andrea Abele-Brehm. I did not ask her to publicize it. But that’s OK.
As in a chess game, I am pressing my advantage – Klaus Fiedler is clearly alone and wrong with his immaculate regression argument – in a follow-up email.
Dear Klaus Fiedler,
I am waiting for a longer response from you, but to answer your question I find it hard to see how my comments are irrelevant, as they challenge direct quotes from your response.
My main concern is that you appear to neglect the fact that regression to the mean can only occur when selection occurred in the original set of studies.
Moritz Heene and I responded to this claim and found that it is invalid. If the original studies were not a selected set of studies, the average observed effect size should be an estimate of the average population effect size, and there would be no reason to expect a dramatic decrease in effect sizes in the OSF replication studies. Let’s just focus on this crucial point.
You can either maintain that selection is not necessary and try to explain how regression to the mean can occur without selection or you can concede that selection is necessary and explain how the OSF replication study should have taken selection into account. At a minimum, it would be interesting to hear your response to our quote of Furby (1973) that shows he assumed selection, while you cite Furby as evidence that selection is not necessary.
Although we may not be able to settle all disputes, we should be able to determine whether Furby assumed selection or not.
Here are my specific responses to your questions.
– a state-of-the-art study of retest = f(original test) has to tackle the regression beast
[We can say that it tackled it by examining how much selection contributed to the original results, that is, by seeing how much the means regressed towards a lower mean of population effect sizes. Result: there was a lot of selection and a lot of regression.]
– reliability of the dependent measure has to be controlled
in a project that aims to replicate original studies exactly, reliability is determined by the methods of the original study
– manipulation check is crucial to assess the effective variation of the independent variable
sure, we can question how good the replication studies were, but adding additional manipulation checks might also introduce concerns that the study is not an exact replication. Nobody is claiming that the replication studies are conclusive, but no study can assure that it was a perfect study.
– the sampling of studies was suboptimal
How so? The year was selected at random. Taking the first studies in a year was also random. Moreover, it is possible to examine whether the results are representative of other studies in the same journals, and they are; see my blog.
You may decide that my responses are not satisfactory, but I would hope that you answer at least one of my questions: Do you maintain that the OSF-results could have been obtained without selection of results that overestimate the true population effect sizes (a lot)?
Sincerely,
Uli Schimmack
Moritz Heene comments.
Thanks, Uli! Don’t let them get away with tactically ignoring these facts. BTW, since we share the same scientific rigor, as far as I can see, we could ponder a possible collaboration study. Just an idea. [This led to the statistical examination of Kahneman’s book Thinking, Fast and Slow]
Regards, Moritz
Too busy to really think about the possibility that he might have been wrong, Fiedler sends a terse response.
Klaus Fiedler
Very briefly … in a mad rush this morning: This is not true. A necessary and sufficient condition for regression is r < 1. So if the correlation between the original results and the replications is less than unity, there will be regression. Draw a scatter plot and you will easily see. An appropriate reference is Furby (1973 or 1974).
I try to clarify the issue in another attempt.
Dear Klaus Fiedler,
The question is what you mean by regression. We are talking about the mean at time 1 and time 2.
Of course, there will be regression of individual scores, but we are interested in the mean effect size in social psychology (which also determines power and percentage of significant results given equal N).
It is simply NOT true that the mean will change systematically unless there is systematic selection of observations.
As regression to the mean is defined by (1- r) * (mu – M), the formula implies that a selection effect (mu – M unequal 0) is necessary. Otherwise the whole term becomes 0.
There are three ways to explain mean differences between two sets of exact replication studies. The original set was selected to produce significant results. The replication studies are crappy and failed to reproduce the same conditions. Random sampling error (which can be excluded because the difference in OSF is highly significant).
In the case of the OSF replication studies, selection occurred because the published results were selected to be significant from a larger set of results with non-significant results.
If you see another explanation, it would be really helpful if you would elaborate on your theory.
Sincerely, Uli Schimmack
Moritz Heene joins the email exchange and makes a clear case that Fiedler’s claims are statistically wrong.
Dear Klaus Fiedler, dear Uli,
Just to add another clarification:
Once again, Furby (1973, p.173, see attached file) explicitly mentioned selection: “Now let us choose a certain aggression level at Time 1 (any level other than the mean) and call it x’ “.
Furthermore, regression to the mean is defined by (1- r)*(mu – M). See Shepard and Finison (1983, p.308, eq. [1]): “The term in square brackets, the product of two factors, is the estimated reduction in BP [blood pressure] due to regression.”
Now let us fix terms:
Definition of necessity and sufficiency
Necessity: ~p –> ~q, with “~” denoting negation (here p = “r < 1” and q = “regression to the mean occurs”).
So, if r is not smaller than 1, then regression to the mean does not occur.
This is true, as can be verified by the formula.
Sufficiency: p –> q
So, if r is smaller than 1, then regression to the mean does occur. This is not true, as can be verified by the formula, as explained in our reply at https://www.dgps.de/index.php?id=2000735#c2001225 and in Ulrich’s previous email.
Sincerely,
Moritz Heene
I sent another email to Klaus to see whether he is going to respond.
Dear Dr. Fiedler,

May I still expect a reply from you, or should I assume that you have decided not to respond to my inquiry?

Best regards, Uli Schimmack
Klaus Fiedler does respond.
Dear Ullrich:
Yes, I was indeed very, very busy over two weeks, working for the Humboldt foundation, for two conferences where I had to play leading roles, the Leopoldina Academy, and many other urgent jobs. Sorry but this is simply so.

I now received your email reminder to send you my comments to what you and Moritz Heene have written. However, it looks like you have already committed yourself publicly (I was sent this by colleagues who are busy on facebook):

“Fiedler was quick to criticize the OSF-project and Brian Nosek for making the mistake to ignore the well-known regression to the mean effect. This silly argument ignores that regression to the mean requires that the initial scores are selected, which is exactly the point of the OSF-replication studies.”
Look, this passage shows that there is apparently a deep misunderstanding about the “silly argument”. Let me briefly try to explain once more what my critique of the Science article (not Brian Nosek personally – this is not my style) referred to.

At the statistical level, I was simply presupposing that there is common ground on the premise that regressiveness is ubiquitous; it is not contingent on selected initial scores. Take a scatter plot of 100 bi-variate points (jointly distributed in X and Y). If r(X,Y) < 1 (disregarding sign), regressing Y on X will result in a regression slope less than 1. The variance of predicted Y scores will be reduced. I very much hope we all agree that this holds for every correlation, not just those in which X is selected. If you don’t believe it, I can easily demonstrate it with random (i.e., non-selective) vectors x and y.

Across the entire set of data pairs, large values of X will be underestimated in Y, and small values of X will be overestimated. By analogy, large original findings can be expected to be much smaller in the replication. However, when we regress X on Y, we can also expect to see that large Y scores (i.e., strong replication effects) have been weaker in the original. The Bayes factors reported by Alexander Etz in his “Bayesian reproducibility project”, although not explicit about reverse regression, strongly suggest that there are indeed quite a few cases in which replication results have been stronger than the original ones. Etz’ analysis, which nicely illustrates how a much more informative and scientifically better analysis than the one provided by Nosek might look, also reinforces my point that the report published in Science is very weak. By the way, the conclusions are markedly different from Nosek’s, showing that most replication studies were equivocal. The link (that you have certainly found yourself) is provided below.
We know since Rulon (1941 or so) and even since Galton (1986 or so) that regression is a tricky thing, and here I get to the normative (as opposed to the statistical, tautological) point of my critique, which is based on the recommendation of such people as Don Campbell, Daniel Kahneman & Amos Tversky, Ido Erev, Tom Wallsten & David Budescu and many others, who have made it clear that the interpretation of retesting or replication studies will be premature and often mistaken, if one does not take the vicissitudes of regression into account. A very nice historical example is Erev, Wallsten & Budescu’s 1994 Psych. Review article on overconfidence. They make it clear you find very strong evidence for both overconfidence and underconfidence in the same data array, when you regress either accuracy on confidence or confidence on accuracy, respectively. Another wonderful demonstration is Moore and Small’s 2008 Psych. Review analysis of several types of self-serving biases.
So, while my statistical point is analytically true (because regression slope with a single predictor is always < 1; I know there can be suppressor effects with slopes > 1 in multiple regression), my normative point is also well motivated. I wonder if the audience of your Internet allusion to my “silly argument” has a sufficient understanding of the “regression trap” so that, as you write:
Everybody can make up their own mind and decide where they want to stand, but the choices are pretty clear. You can follow Fiedler, Strack, Baumeister, Gilbert, Bargh and continue with business as usual or you can change. History will tell what the right choice will be.
By the way, why do you put me in the same pigeonhole as Fritz, Roy, Dan, and John? The role I am playing is completely different, and it definitely does not aim at business as usual. My very comment on the Nosek article is driven by my deep concerns about the lack of scientific scrutiny in such a prominent journal, in which there is apparently no state-of-the-art quality control. A replication project is the canonical case of a scientific interpretation that strongly calls for awareness of the regression trap. That is, the results are only informative if one takes into account what shrinkage of strong effects could be expected by regression alone. Regressiveness imposes an upper limit on the possible replication success, which ought to be considered as a baseline for the presentation of the replication results.
To do that, it is essential to control for reliability. (I know that the reliability of individual scores within a study is not the same as the reliability of the aggregate study results, but they are of course related). I also continue to believe, strongly, that a good replication project ought to control for the successful induction of the independent variable, as evident in a manipulation check (maybe in an extra group), and that the sampling of the 100 studies itself was suboptimal. If Brian Nosek (or others) come up with a convincing interpretation of this replication project, then it is fine. However, the present analysis is definitely not convincing. It is rather a symptom of shallow science.
So, as you can see, the comments that you and Moritz Heene have sent me do not really affect these considerations. And, because there is obviously no common ground between the two of us, not even about the simplest statistical constraints, I have decided not to engage in a public debate with you. I’m afraid hardly anybody in this Facebook cycle will really invest time and work to read the literature necessary to judge the consequences of the regression trap, in order to make an informed judgment. And I do not want to nourish the malicious joy of an audience that apparently likes personal insults and attacks, detached from scientific arguments.
Kind regards, Klaus
P.S. As you can see, I CC this email to myself and to Joachim Krueger, who spontaneously sent me a similar note on the Nosek article and the regression trap.
Klaus Fiedler responds
Dear Ullrich:
Yes, I was indeed very, very busy over two weeks, working for the Humboldt foundation, for two conferences where I had to play leading roles, the Leopoldina Academy, and many other urgent jobs. Sorry but this is simply so.
I now received your email reminder to send you my comments to what you and Moritz Heene have written. However, it looks like you have already committed yourself publicly (I was sent this by colleagues who are busy on facebook):
Fiedler was quick to criticize the OSF-project and Brian Nosek for making the mistake to ignore the well-known regression to the mean effect. This silly argument ignores that regression to the mean requires that the initial scores are selected, which is exactly the point of the OSF-replication studies.
Look, this passage shows that there is apparently a deep misunderstanding about the “silly argument”. Let me briefly try to explain once more what my critique of the Science article (not Brian Nosek personally – this is not my style) referred to.
At the statistical level, I was simply presupposing that there is common ground on the premise that regressiveness is ubiquitous; it is not contingent on selected initial scores. Take a scatter plot of 100 bi-variate points (jointly distributed in X and Y). If r(X,Y) < 1(disregarding sign), regressing Y on X will result in a regression slope less than 1. The variance of predicted Y scores will be reduced. I very much hope we all agree that this holds for every correlation, not just those in which X is selected. If you don’t believe, I can easily demonstrate it with random (i.e., non-selective vectors x and y).
Across the entire set of data pairs, large values of X will be underestimated in Y, and small values of X will be overestimated. By analogy, large original findings can be expected to be much smaller in the replication. However, when we regress X on Y, we can also expect to see that large Y scores (i.e., i.e., strong replication effects) have been weaker in the original. The Bayes factors reported by Alexander Etz in his “Bayesian reproducibility project”, although not explicit about reverse regression, strongly suggest that there are indeed quite a few cases in which replication results have been stronger than the original ones. Etz’ analysis, which nicely illustrates how a much more informative and scientifically better analysis than the one provided by Nosek might look like, also reinforces my point that the report published in Science is very weak. By the way, the conclusions are markedly different from Nosek, showing that most replication studies were equivocal. The link (that you have certainly found yourself) is provided below.
We know since Rulon (1941 or so) and even since Galton (1986 or so) that regression is a tricky thing, and here I get to the normative (as opposed to the statistical, tautological) point of my critique, which is based on the recommendation of such people as Don Campbell, Daniel Kahneman & Amos Tversky, Ido Erev, Tom Wallsten & David Budescu and many others, who have made it clear that the interpretation of retesting or replication studies will be premature and often mistaken, if one does not take the vicissitudes of regression into account. A very nice historical example is Erev, Wallsten & Budescu’s 1994 Psych. Review article on overconfidence. They make it clear you find very strong evidence for both overconfidence and underconfidence in the same data array, when you regress either accuracy on confidence or confidence on accuracy, respectively. Another wonderful demonstration is Moore and Small’s 2008 Psych. Review analysis of several types of self-serving biases.
So, while my statistical point is analytically true (because regression slope with a single predictor is always < 1; I know there can be suppressor effects with slopes > 1 in multiple regression), my normative point is also well motivated. I wonder if the audience of your Internet allusion to my “silly argument” has a sufficient understanding of the “regression trap” so that, as you write:
Everybody can make up their own mind and decide where they want to stand, but the choices are pretty clear. You can follow Fiedler, Strack, Baumeister, Gilbert, Bargh and continue with business as usual or you can change. History will tell what the right choice will be.
By the way, why do you put me in the same pigeon hole as Fritz, Roy, Dan, and John? The role I am playing is completely different, and it definitely does not aim at business as usual. My very comment on the Nosek article is driven by my deep concerns about the lack of scientific scrutiny in such a prominent journal, in which there is apparently no state-of-the-art quality control. A replication project is the canonical case of a scientific interpretation that strongly calls for awareness of the regression trap. That is, the results are only informative if one takes into account what shrinkage of strong effects could be expected by regression alone. Regressiveness imposes an upper limit on the possible replication success, which ought to be considered as a baseline for the presentation of the replication results.
To do that, it is essential to control for reliability. (I know that the reliability of individual scores within a study is not the same as the reliability of the aggregate study results, but they are of course related). I also continue to believe, strongly, that a good replication project ought to control for the successful induction of the independent variable, as evident in a manipulation check (maybe in an extra group), and that the sampling of the 100 studies itself was suboptimal. If Brian Nosek (or others) come up with a convincing interpretation of this replication project, then it is fine. However, the present analysis is definitely not convincing. It is rather a symptom of shallow science.
So, as you can see, the comments that you and Moritz Heene have sent me do not really affect these considerations. And, because there is obviously no common ground between the two of us, not even about the simplest statistical constraints, I have decided not to engage in a public debate with you. I’m afraid hardly anybody in this Facebook circle will really invest the time and work to read the literature necessary to judge the consequences of the regression trap, in order to make an informed judgment. And I do not want to nourish the malicious joy of an audience that apparently likes personal insults and attacks, detached from scientific arguments.
Kind regards, Klaus
P.S. As you can see, I CC this email to myself and to Joachim Krueger, who spontaneously sent me a similar note on the Nosek article and the regression trap.
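Fiedler’s statistical point is easy to verify, and so is the part he leaves out. The following simulation is my own sketch (not code from Fiedler or the OSC; the sample size and correlation are arbitrary): with random, unselected bivariate data and r < 1, the standardized slope is below 1 in both directions, but the means of the two variables are unaffected.

```python
# Minimal sketch: regression in both directions for unselected bivariate data.
import numpy as np

rng = np.random.default_rng(1)
n, r = 100_000, 0.5

x = rng.standard_normal(n)
y = r * x + np.sqrt(1 - r**2) * rng.standard_normal(n)  # corr(x, y) is ~r

slope_y_on_x = np.polyfit(x, y, 1)[0]
slope_x_on_y = np.polyfit(y, x, 1)[0]

print(f"slope of y on x: {slope_y_on_x:.2f}")   # ~0.5, below 1
print(f"slope of x on y: {slope_x_on_y:.2f}")   # ~0.5, below 1 as well
print(f"mean of x: {x.mean():.3f}, mean of y: {y.mean():.3f}")  # both ~0
```

That is the 2+2=4 part of the debate. The question that matters for the OSC report is why the average effect size dropped, and regression alone does not shrink an average without selection.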
I made another attempt to talk about selection bias and ended up pretty much asking a simple yes/no question, like a prosecutor questioning a hostile witness.
Dear Klaus,
I don’t understand why we cannot even agree about the question that regression to the mean is supposed to answer.
Moritz Heene and I are talking about the mean difference in effect sizes (the intercept, not the slope, in a regression). According to the Science article, the effect sizes in the replication studies were, on average, 50% lower than the effect sizes in the original studies. My own analysis for social psychology shows a difference of d = .6 versus d = .2, which suggests that results published in original articles are inflated by 200%. Do you believe that regression to the mean can explain this finding? Again, this is not a question about the slope, so please try to provide an explanation that can account for mean differences in effect sizes.
Of course, you can just say that we know that a published significant result is inflated by publication bias. After all, power is never 100%, so if you select 100% significant results for publication, you cannot expect 100% successful replications. The percentage that you can expect is determined by the true power of the set of studies (this has nothing to do with regression to the mean; it is simply power + publication bias). However, the OSF-reproducibility project did take power into account and increased sample sizes to address the problem. They are also aware that the replication studies will not produce 100% successes even if the replication studies were planned with 90% power.
The problem that I see with the OSF-project is that they were naïve to use the observed effect sizes to conduct their power analyses. As these effect sizes were strongly inflated by publication bias, the true power was much lower than they thought it would be. For social psychology, I calculated the true power of the original studies to be only 35%. Increasing sample sizes from 90 to 120 does not make much of a difference with power this low. If your point is simply to say that the replication studies were underpowered to reject the null-hypothesis, I agree with you. But the reason for the low power is that reported results in the literature are not credible and strongly influenced by bias. Published effect sizes in social psychology are, on average, 1/3 real and 2/3 bias. Good luck finding the false positive results with evidence like this.
Do you disagree with any of my arguments about power, publication bias, and the implication that social psychological results lack credibility?
Best regards,
Uli
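To make the publication-bias argument concrete, here is a rough simulation (my own sketch; the true effect size, sample size, and number of studies are illustrative and not estimates from the OSF data). Original studies with modest true power are run, only the significant ones get “published”, and each published study is replicated once with the same sample size and no selection.

```python
# Sketch: selection for significance inflates published effect sizes, and
# honest replications succeed at a rate close to the true power.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_d, n, alpha, n_studies = 0.2, 125, 0.05, 5_000  # illustrative values

published_d, replication_success = [], []
for _ in range(n_studies):
    treat = rng.normal(true_d, 1, n)
    ctrl = rng.normal(0.0, 1, n)
    t, p = stats.ttest_ind(treat, ctrl)
    if p >= alpha or t < 0:
        continue  # non-significant originals remain in the file drawer
    d_obs = (treat.mean() - ctrl.mean()) / np.sqrt(
        (treat.var(ddof=1) + ctrl.var(ddof=1)) / 2)
    published_d.append(d_obs)
    # exact replication of the published study, no selection this time
    t_rep, p_rep = stats.ttest_ind(rng.normal(true_d, 1, n), rng.normal(0.0, 1, n))
    replication_success.append(p_rep < alpha and t_rep > 0)

print(f"true d = {true_d}, mean published d = {np.mean(published_d):.2f}")
print(f"replication success rate = {np.mean(replication_success):.2f}")

# what replicators would expect if they took the published effect sizes at face value
ncp = np.mean(published_d) * np.sqrt(n / 2)   # noncentrality for a two-sample test
naive_power = stats.norm.sf(1.96 - ncp)       # normal approximation
print(f"power implied by the published d values = {naive_power:.2f}")
```

With these numbers the published effect sizes come out roughly twice as large as the true effect, the power implied by taking them at face value looks excellent, and yet the actual replication rate stays close to the true power of the design. That is the kind of pattern the reproducibility project reported.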
Klaus Fiedler’s response continues to evade the topic of selection bias, which undermines the credibility of published results given a replication rate of 25%. However, he acknowledges for the first time that regression works in both directions and therefore cannot explain mean changes without selection bias.
Dear Uli, Moritz and Krueger:
I’m afraid it’s getting very basic now … we are talking about problems which are not really there … very briefly, just for the sake of politeness
First, as already clarified in my letter to Uli yesterday, nobody will come to doubt that every correlation < 1 will produce regression in both directions. The scatter plot does not have to be somehow selected. Let’s talk about (or simulate) a bi-variate random sample. Given r < 1, if you plot Y as a function of X (i.e., “given” X values), the regression curve will have a slope < 1, that is, Y values corresponding to high X values will be smaller and Y values corresponding to low X values will be higher. In one word, the variance in Y predictions (in what can be expected in Y) will shrink. If you regress X on Y, the opposite will be the case in the same data set. That’s the truism that I am referring to.
Of course, regression is always a conditional phenomenon. Assuming a regression of Y on X: If X is (very) high, the predicted Y analogue is (much) lower. If X is (very) low, the predicted Y analogue is (much) higher. But this conditional IF phrase does not imply any selectivity. The entire sample is drawn randomly. By plotting Y as a function of given X levels (contaminated with error and unreliability), you conditionalize Y values on (too) high or (too) low X values. But this is always the case with regression.
If I correctly understand the point, you simply equate the term “selective” with “conditional on” or “given”. But all this is common sense, or isn’t it? If you believe you have found a mathematical or Monte-Carlo proof that a correlation (in a bivariate distribution) is 1 and there is no regression (in the scatter plot), then you can probably make a very surprising contribution to statistics and numerical mathematics.
Of course, regression is a multiplicative function of unreliability and extremity. So points have to be extreme to be regressive. But I am talking about the entire distribution …
Best, Klaus
… who is now going back to work, sorry.
At this point, Moritz Heene is willing to let it go. There is really no point in arguing with a dickhead – a slightly wrong translation of the German term “Dickkopf” (bull-headed, stubborn).
Dear Uli,
Sorry for switching to German: Given Fiedler’s email below, I consider it a “fruitless endeavour” to keep discussing this. He does not engage with our – formally correct! – arguments at all, and by now he has arrived at “you are not even worth discussing with.” He also does not find it worth mentioning that he demonstrably miscites Ferby (1973). I will not discuss this with him any further, because he simply refuses to see it and therefore no longer even mentions our mathematically correct arguments (tactical ignorance).
One of the big problems of psychology is that its problems can be refuted at a dreadfully basic level. For example, the “hidden moderator argument” can still be refuted at the pub with a blood alcohol level of 1.3. Unfortunately, it keeps reappearing in articles by Strack and Stroebe and others.
I agreed with him and decided to write a blog post about this fruitless discussion. I didn’t do so until now, when the PoPS scandal reminded me of Fiedler’s “I am never wrong” attitude.
Hello Moritz,
Yes, the discussion is over. Now I will write a blog post with the emails to show what kind of spiteful (“schadenfeinig” – is that even a word?) arguments are being used.
Zero respect for Klaus Fiedler.
Best, Uli
I communicated our decision to end the discussion to Klaus Fiedler in a final email.
Dear Klaus,
Last email from me to you.
It is sad that you are not even trying to answer my questions about the results of the reproducibility project.
I am also going back to work now, where my work is to save psychology from psychologists like you who continue to deny that psychology has been facing a crisis for 50 years, make some quick bogus statistical arguments to undermine the credibility of the OSF-reproducibility project, and then go back to work as usual.
History will decide who wins this argument.
Disappointed (which implies that I had expected more from you when I started this attempt at a scientific discussion), Uli
Klaus Fiedler replied with his last email.
Dear Uli:
no, sorry that is not my intention … and not my position. I would like to share with you my thoughts about reproducibility … and I am not at all happy with the (kernel of truth) of the Nosek report. However, I believe the problems are quite different from those focused on in the current debate, and in the premature consequences drawn by Nosek, Simonsohn, and others. You may have noticed that I have published a number of relevant articles, arguing that what we are lacking is not better statistics and larger subject samples but a better, broader methodology. Why should we two (including Moritz and Joachim and others) not share our thoughts, and I would also be willing to read your papers. Sure. For the moment, we have been only debating about my critique of the Nosek report. My point was that in such a report of replications plotted against originals:
(1) an informed interpretation is not possible unless one takes regression into account;
(2) one has to control for reliability as a crucial moderator;
(3) one has to consider manipulation checks;
(4) one has to contemplate the sampling of studies.
Our “debate” about 2+2=4 (I agree that’s what it was) does not affect this critique. I do not believe that I am at variance with your mathematical sketch, but it does not undo the fact that in a bivariate distribution of 100 bivariate points, the devil is lurking in the regression trap.
So please distinguish between the two points: (a) Nosek’s report does not live up to appropriate standards; but (b) I am not unwilling to share with you my thoughts about replicability. (By the way, I met Ioannidis some weeks ago, and I never saw as clearly as now that he, like Fanelli, whom I also met, believes that all behavioral science is unreliable and invalid.)
Kind regards, Klaus
More Gaslighting about the Replication Crisis by Klaus Fiedler
Klaus Fiedler and Norbert Schwarz are both German-born influential social psychologists. Norbert Schwarz migrated to the United States but continued to collaborate with German social psychologists like Fritz Strack. Klaus Fiedler and Norbert Schwarz have only one peer-reviewed joint publication, titled “Questionable Research Practices Revisited.” This article is based on John, Loewenstein, and Prelec’s (2012) influential article that coined the term “questionable research practices.” In the original article, John et al. (2012) conducted a survey and found that many researchers admitted that they had used QRPs and also considered these practices acceptable (i.e., not a violation of ethical norms about scientific integrity). John et al.’s (2012) results provide a simple explanation for the outcome of the reproducibility project. Researchers use QRPs to get statistically significant results in studies with low statistical power. This leads to an inflation of effect sizes. When these studies are replicated WITHOUT QRPs, effect sizes are closer to the real effect sizes and lower than the inflated estimates in the original studies. As a result, the average effect size shrinks and the percentage of significant results decreases. All of this was clear when Moritz Heene and I debated with Fiedler.
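For readers who want to see the mechanics, here is a toy example (my own sketch, not an analysis from John et al. or from Fiedler and Schwarz; the number of outcomes, sample size, and true effect are made up). One common QRP is measuring several outcomes and reporting only the one that “worked”. With a small true effect and a small sample, this produces a respectable rate of significant results and badly inflated reported effect sizes, which is all that is needed to explain shrinkage in replications that do not use the same trick.

```python
# Toy QRP: run one study with several outcomes, report only the best one.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
true_d, n, n_dvs, n_studies = 0.2, 40, 5, 5_000  # made-up values

reported_d, hits = [], 0
for _ in range(n_studies):
    treat = rng.normal(true_d, 1, (n, n_dvs))   # n_dvs outcomes per group
    ctrl = rng.normal(0.0, 1, (n, n_dvs))
    res = stats.ttest_ind(treat, ctrl)          # one t-test per outcome
    best = int(np.argmin(res.pvalue))           # pick the "best" outcome
    if res.pvalue[best] < 0.05 and res.statistic[best] > 0:
        hits += 1
        d = (treat[:, best].mean() - ctrl[:, best].mean()) / np.sqrt(
            (treat[:, best].var(ddof=1) + ctrl[:, best].var(ddof=1)) / 2)
        reported_d.append(d)

print(f"share of studies with a reportable 'effect': {hits / n_studies:.2f}")
print(f"mean reported d: {np.mean(reported_d):.2f} (true d = {true_d})")
```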
Fiedler and Schwarz’s article had one purpose, namely to argue that John et al.’s (2012) article did not provide credible evidence for the use of QRPs. The article does not make any connection between the use of QRPs and the outcome of the reproducibility project.
“The resulting prevalence estimates are lower by order of magnitudes. We conclude that inflated prevalence estimates, due to problematic interpretation of survey data, can create a descriptive norm (QRP is normal) that can counteract the injunctive norm to minimize QRPs and unwantedly damage the image of behavioral sciences, which are essential to dealing with many societal problems” (Fiedler & Schwarz, 2016, p. 45).
Indeed, the article has been cited to claim that “questionable research practices” are not always questionable and that “QRPs may be perfectly acceptable given a suitable context and verifiable justification (Fiedler & Schwarz, 2016; …)” (Rubin & Dunkin, 2022).
To be clear about what this means: Rubin and Dunkin claim that it is perfectly acceptable to run multiple studies and publish only those that worked, to drop observations to increase effect sizes, and to switch outcome variables after looking at the results. No student will agree that these practices are scientific or trust results based on such practices. However, Fiedler and other social psychologists want to believe that they did nothing wrong when they engaged in these practices to publish.
Fiedler triples down on Immaculate Regression
I assumed everybody had moved on from the heated debates in the wake of the reproducibility project, but I was wrong. Only a week ago, I discovered an article by Klaus Fiedler – co-authored with one of his students – that repeats the regression-trap claims in an English-language peer-reviewed journal under the title “The Regression Trap and Other Pitfalls of Replication Science—Illustrated by the Report of the Open Science Collaboration” (Fiedler & Prager, 2018).
ABSTRACT The Open Science Collaboration’s 2015 report suggests that replication effect sizes in psychology are modest. However, closer inspection reveals serious problems.
A more general aim of our critical note, beyond the evaluation of the OSC report, is to emphasize the need to enhance the methodology of the current wave of simplistic replication science.
Moreover, there is little evidence for an interpretation in terms of insufficient statistical power.
Again, it is sufficient to assume a random variable of positive and negative deviations (from the overall mean) in different study domains or ecologies, analogous to deviations of high and low individual IQ scores. One need not attribute such deviations to “biased” or unfair measurement procedures, questionable practices, or researcher expectancies.
Yet, when concentrating on a domain with positive deviation scores (like gifted students), it is permissible—though misleading and unfortunate—to refer to a “positive bias” in a technical sense, to denote the domain-specific enhancement.
Depending on the selectivity and one- sided distribution of deviation scores in all these domains, domain-specific regression effects can be expected.
How about the domain of replication science? Just as psychopathology research produces overall upward regression, such that patients starting in a crisis or a period of severe suffering (typically a necessity for psychiatric diagnoses) are better off in a retest, even without therapy (Campbell, 1996), research on scientific findings must be subject to an opposite, downward regression effect. Unlike patients representing negative deviations from normality, scientific studies published in highly selective journals constitute a domain of positive deviations, of well-done empirical demonstrations that have undergone multiple checks on validity and a very strict review process. In other words, the domain of replication science, major empirical findings, is inherently selective. It represents a selection of the most convincing demonstrations of obtained effect sizes that should exceed most everyday empirical observations. Note once more that the emphasis here is not on invalid effects or outliers but on valid and impressive effects, which are, however, naturally contaminated with overestimation error (cf. Figure 2).
The domain-specific overestimation that characterizes all science is by no means caused by publication bias alone. [!!!!! the addition of alone here is the first implicit acknowledgement that publication bias contributes to the regression effect!!!!]
To summarize, it is a moot point to speculate about the reasons for more or less successful replications as long as no evidence is available about the reliability of measures and the effectiveness of manipulations.
In the absence of any information about the internal and external validity (Campbell, 1957) of both studies, there is no logical justification to attribute failed replications to the weakness of scientific hypotheses or to engage in speculations about predictors of replication success.
A recent simulation study by Stanley and Spence (2014) highlights this point, showing that measurement error and sampling error alone (Schmidt, 2010) can greatly reduce the replication success of empirical tests of correct hypotheses in studies that are not underpowered.
Our critical comments on the OSC report highlight the conclusion that the development of such a methodology is sorely needed.
Final Conclusion
Fiedler’s illusory regression account of the replication crisis has been known to me since 2015. It was not part of the official record. However, his articles with Schwarz in 2016 and Prager in 2018 are part of his official CV. The articles show a clear motivated bias against Open Science and the reforms initiated by social psychologists to fix their science. He was fired because he demonstrated the same arrogant dickheadedness in interactions with a Black scholar. Does this mean he is a racist? No, he also treats White colleagues with the same arrogance, yet when he treated Roberts like this he abused his position as gate-keeper at an influential journal. I think APS made the right decision to fire him, but they were wrong to hire him in the first place. The past editors of PoPS have shown that old White eminent psychologists are unable to navigate the paradigm shift in psychology towards credibility, transparency, and inclusivity. I hope APS will learn a lesson from the reputational damage caused by Fiedler’s actions and search for a better editor who represents the values of contemporary psychologists.
P.S. This blog post is about Klaus Fiedler, the public figure and his role in psychological science. It has nothing to do with the human being.
P.P.S. I also share with Klaus the experience of being forced from an editorial position. I was co-founding editor of Meta-Psychology and made some controversial comments about another journal that led to a negative response. To save the new journal, I resigned. It was for the better, and Rickard Carlsson is doing a much better job alone than we could have done together. It hurt a little, but life goes on. Reputations are not made by a single incident, especially if you can admit to mistakes.
The journal Psychological Inquiry publishes theoretical articles that are accompanied by commentaries. In a recent issue, prominent implicit cognition researchers discussed the meaning of the term implicit. This blog post differs from the commentaries by researchers in the field by providing an outsider perspective and by focusing on the importance of communicating research findings clearly to the general public. This purpose of definitions was largely ignored by researchers who are more focused on communicating with each other than with the general public. I will show that this unique outsider perspective favors a definition of implicit bias in terms of the actual research that has been conducted under the umbrella of implicit social cognition research rather than a definition that renders 30 years of research useless with a simple stroke of a pen. If social cognition researchers want to communicate about implicit bias as empirical scientists, they have to define implicit bias as effects of automatically activated information (associations, stereotypes, attitudes) on behavior. This is what they have studied for 30 years. Defining implicit bias as unconscious bias is not helpful because 30 years of research have failed to provide any evidence that people can act in a biased way without awareness. Although unconscious biases may occur, there is currently no scientific evidence to inform the public about unconscious biases. While the existing research on automatically activated stereotypes and attitudes has problems, the topic remains important. As the term implicit bias has caught on, it can be used in communications with the public, but it should be made clear that implicit does not mean unconscious.
Introduction
Psychologists are notoriously sloppy with language. This leads to misunderstandings and unnecessary conflicts among scientists. However, the bigger problem is a break-down in communication with the general public. This is particularly problematic in social psychology because research on social issues can influence public discourse and ultimately policy decisions.
One of the biggest case-studies of conceptual confusion that had serious real-world consequences is the research on implicit cognition that created the popular concept of implicit bias. Although the term implicit bias is widely used to talk about racism, the term lacks clear meaning.
The Stanford Encyclopedia of Philosophy defines implicit bias as a tendency to “act on the basis of prejudice and stereotypes without intending to do so.” However, lack of intention (not wanting to) is only one of several meanings of the term implicit. Another meaning of the word implicit is automatic activation of thoughts. For example, a Scientific American article describes implicit bias as a “tendency for stereotype-confirming thoughts to pass spontaneously through our minds.” Notably, this definition of implicit bias clearly implies that people are aware of the activated stereotype. The stereotype-confirming thought is in people’s mind and not activated in some other area of the brain that is not accessible to consciousness. This definition also does not imply that implicit bias results in biased behavior because awareness makes it possible to control the influence of activated stereotypes on behavior.
Merriam Webster Dictionary offers another definition of implicit bias as “a bias or prejudice that is present but not consciously held or recognized.” In contrast to the first two meanings of implicit bias, this definition suggests that implicit bias may occur without awareness; that is implicit bias = unconscious bias.
The different definitions of implicit bias lead to very different explanations of biased behavior. One explanation assumes that implicit biases can be activated and guide behavior without awareness and individuals who act in a biased way may either fail to recognize their biases or make up some false explanation for their biased behaviors after the fact. This idea is akin to Freud’s notion of a powerful, autonomous unconscious (the Id) that can have subversive effects on behavior that contradict the values of a conscious, moral self (Super-Ego). Given the persistent influence of Freud on contemporary culture, this idea of implicit bias is popular and reinforced by the Project Implicit website that offers visitors tests to explore their hidden (hidden = unconscious) biases.
The alternative interpretation of implicit bias is less mysterious and more mundane. It means that our brain constantly retrieves information from memory that is related to the situation we are in. This process does not have a filter to retrieve only information that we want. As a result, we sometimes have unwanted thoughts. For example, even individuals who do not want to be prejudiced will sometimes have unwanted stereotypes and associated negative feelings pop into their mind (Scientific American). No psychoanalysis or implicit test is needed to notice that our memory has stored stereotypes. In safe contexts, we may even laugh about them (Family Guy). In theory, awareness that a stereotype was activated also makes it possible to make sure that it does not influence behavior. This may even be the main reason for our ability to notice what our brain is doing. Rather than acting in a reflexive way to a situation, awareness makes it possible to respond more flexibly to a situation. When implicit is defined as automatic activation of a thought, the distinction between implicit and explicit bias becomes minor and academic because the processes that retrieve information from memory are automatic. The only difference between implicit and explicit retrieval of information is that the process may be triggered spontaneously by something in our environment or by a deliberate search for information.
After more than 30 years of research on implicit cognitions (Fazio, Sanbonmatsu, Powell, & Kardes, 1986), implicit social cognition researchers increasingly recognize the need for clearer definitions of the term implicit (Gawronski, Ledgerwood, & Eastwick, 2022a), but there is little evidence that they can agree on a definition (Gawronski, Ledgerwood, & Eastwick, 2022b). Gawronski et al. (2022a, 2022b) propose to limit the meaning of implicit bias to unconscious biases; that is, individuals are unaware that their behavior was influenced by activation of negative stereotypes or affects/attitudes: “instances of bias can be described as implicit if respondents are unaware of the effect of social category cues on their behavioral response” (p. 140). I argue that this definition is problematic because there is no scientific evidence to support the hypothesis that prejudice is unconscious. Thus, the term cannot be used to communicate scientific results that have been obtained by implicit cognition researchers over the past three decades because these studies did not study unconscious bias.
Implicit Bias Is Not Unconscious Bias
Gawronski et al. note that their decision to limit the term implicit to mean unconscious is arbitrary. “A potential objection against our arguments might be that they are based on a particular interpretation of implicit in IB that treats the term as synonymous with unconscious” (p. 145). Gawronski et al. argue in favor of their definition because “unconscious biases have the potential to cause social harm in ways that are fundamentally different from conscious biases that are unintentional and hard-to-control” (p. 146). The key words in this argument are “have the potential,” which means that there is no scientific evidence that shows different effects of biases with and without awareness of bias. Thus, the distinction is merely a theoretical, academic one without actual real-world implications. Gawronski et al. agree with this assessment when they point out that existing implicit cognition research “provides no information about IB [implicit bias] if IB is understood as an unconscious effect of social category cues on behavioral responses.” It seems bizarre to define the term implicit bias in a way that makes all of the existing implicit cognition research irrelevant. A more reasonable approach would be to define implicit bias in a way that is more consistent with the work of implicit bias researchers. As several commentators pointed out, the most widely used meaning of implicit is automatic activation of information stored in memory about social groups. In fact, Gawronski himself used the term implicit in this sense and repeatedly pointed out that implicit does not mean unconscious (i.e., without awareness) (Appendix 1).
Defining the term implicit as automatic activation makes sense because the standard experimental procedure to study implicit cognition is based on presenting stimuli (words, faces, names) related to a specific group and to examine how these stimuli influence behaviors such as the speed of pressing a button on a keyboard. The activation of stereotypic information is automatic because participants are not told to attend to these stimuli or even to ignore them. Sometimes the stimuli are also presented in subtle ways to make it less likely that participants consciously attend to them. The question is always whether these stimuli activate stereotypes and attitudes stored in memory and how activation of this information influences behavior. If behavior is influenced by the stimuli, it suggests that stereotypic information was activated – with or without awareness. The evidence from studies like these provides the scientific basis for claims about implicit bias. Thus, implicit bias is basically operationally defined as systematic effects of automatically activated information about groups on behavior.
The aim of implicit bias research is to study real-world incidences of prejudice under controlled laboratory conditions. A recent incident of racism shows how activation of stereotypes can have harmful consequences for victims and perpetrators of racist behavior.
The question of consciousness is secondary. What is important is how individuals can prevent harmful consequences of prejudice. What can individuals do to avoid storing negative stereotypes and attitudes in the first place? What can individuals do to weaken stored memories and attitudes? What can individuals do to make it less likely that stereotypes are activated? What can individuals do to control the influence of attitudes when they are activated? All of these questions are important and are related to the concept of implicit as automatic activation of attitudes. The only reason to emphasize unconscious processes would be a scenario where individuals are unable to control the influence of information that influences behavior without awareness. However, given the lack of evidence that unconscious biases exist, it is currently unnecessary to focus on this scenario. Clearly, many instances of biases occur with awareness (“White teacher in Texas fired after telling students his race is ‘the superior one’”).
Unfortunately, it may be surprising for some readers to learn that implicit does not mean unconscious because the term implicit bias has been popularized in part to make a distinction between well-known forms of bias and prejudice and a new form of bias that can influence behavior even when individuals are consciously trying to be unbiased. These hidden biases occur against individuals’ best intentions because they exist in a blind spot of consciousness. This meaning of implicit bias was popularized by Banaji and Greenwald (2013), who also founded the Project Implicit website that provides individuals with feedback about their hidden biases; akin to psychoanalysts who can recover repressed memories.
Gawronski et al. (2022b) point out that Greenwald and Banaji’s theory of unconscious bias evolved independently of research by other implicit bias researchers who focused on automaticity and were less concerned about the distinction between conscious and unconscious biases. Gawronski’s definition of implicit bias as unconscious bias favors Banaji and Greenwald’s school of thought (hidden bias) over other research programs (automatically activated biases). The problem with this decision is that Greenwald and Banaji recently walked back their claims about unconscious biases and no longer maintain that the effects they studied were obtained without awareness (Implicit = Indirect & Indirect ≠ Unconscious; Greenwald & Banaji, 2017). The reversal of their theoretical position is evident in their statement that “even though the present authors find themselves occasionally lapsing to use implicit and explicit as if they had conceptual meaning [unconscious vs. conscious], they strongly endorse the empirical understanding of the implicit–explicit distinction” (p. 892). It is puzzling to see Gawronski arguing for a definition that is based on a theory that the authors no longer endorse. Given the lack of scientific evidence that stereotypes regularly lead to biases without awareness, this might be the time to agree on a definition that matches the actual research by implicit cognition researchers, and the most fitting definition would be automatic activation of stereotypes and attitudes, not unconscious causes of behavior.
Gawronski et al. (2022a) also falsely imply that implicit cognition researchers have ignored the distinction between conscious and unconscious biases. In reality, numerous studies have tried to demonstrate that implicit biases can occur without awareness. To study unconscious biases, social cognition researchers have relied heavily on an experimental procedure known as subliminal priming. In a subliminal priming study, a stimulus (prime) is presented very briefly, outside of the focus of attention, and/or with a masking stimulus. If a manipulation check shows that individuals have no awareness of the prime and the prime influences behavior, the effect appears to occur without awareness. Several studies suggested that racial primes can influence behavior without awareness (Bargh et al., 1996; Davis, 1989).
However, the credibility of these results has been demolished by the replication crisis in social psychology (Open Science Collaboration, 2015; Schimmack, 2020). Priming research has been singled out as the field with the biggest replication problems (Kahneman, 2012). When asked to replicate their own findings, leading priming researchers like Bargh refused to do so. Thus, while subliminal priming studies started the implicit revolution (Greenwald & Banaji, 2017), the revolution imploded over the past decade when doubts about the credibility of the original findings increased.
Unfortunately, researchers within the field of implicit bias research often ignore the replication crisis and cite questionable evidence as if it provided solid evidence for unconscious biases. For example, Gawronski et al. (2022b) suggest that unconscious biases may contribute to racial disparities in use-of-force errors such as the high-profile killing of Philando Castile. To make this case, they use a (single) study of 58 White undergraduate students (Correll, Wittenbrink, Crawford, & Sadler, 2015, Study 3). The study asked participants to make shoot vs. no-shoot decisions in a computer task (game) that presented pictures of White or Black men holding a gun or another object. Participants were instructed to make one quick decision within 630 milliseconds and another decision without time restriction. Gawronski et al. suggest that failures to correct an impulsive error given ample time to do so constitutes evidence of unconscious bias. They summarized the results as evidence that “unconscious effects on basic perceptual processes play a major role in tasks that more closely resemble real-world settings” (p. 226).
Fact checking reveals that this characterization of the study and its results is at least misleading, if not outright false. First, it is important to realize that the critical picture was presented for only 175ms and immediately replaced by another picture to wipe out visual memory. Although this is not a strictly subliminal presentation of stimuli, it is clearly a suboptimal presentation of stimuli. As a result, participants sometimes had to guess what the object was. They also had no other information to know whether their initial perception was correct or incorrect. The fact that participants’ performance improved without time pressure may be due to response errors under time pressure and this improvement was evident independent of the race of the men in the picture.
Without time pressure, participants shot 85% of armed Black men and 83% of armed White men. For unarmed men, participants shot 28% of Black men and 25% of White men. The statistical comparison of these differences showed only weak evidence of systematic bias. The comparison for unarmed men produced a p-value that was just significant with the standard criterion of alpha = .05, F(1,53) = 6.65, p = .013, but not with the more stringent criterion of alpha = .005 that is used to predict a high chance of replication. The same is true for the other comparison, F(1,53) = 4.96, p = .031. To my knowledge, this study has not been replicated, and Gawronski et al.’s claim rests entirely on this single study.
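For the record, the reported p-values can be checked directly from the F statistics (assuming the F values and degrees of freedom in the article are reported accurately):

```python
# Upper-tail p-values for the reported F statistics with 1 and 53 degrees of freedom.
from scipy import stats

for f_value in (6.65, 4.96):
    p = stats.f.sf(f_value, dfn=1, dfd=53)
    print(f"F(1,53) = {f_value}: p = {p:.3f}")
# Both p-values fall below .05 but well above the stricter .005 threshold.
```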
Even if these effects could be replicated in the laboratory, they do not provide any information about unconscious biases in the real world because the study lacks ecological validity. To make claims about the real world, it is necessary to study police officers in simulations of real-world scenarios (Andersen, Di Nota, Boychuk, Schimmack, & Collins, 2021). This research is rare, difficult, and has not yet produced conclusive results. Andersen et al. (2021) found a small racial bias, but the sample was too small to provide meaningful information about the amount of racial bias in the real world. Most important, however, real-world scenarios provide ample information to see whether a suspect is Black or White and is armed or not. The real decision is often whether use of force is warranted or not. Racial biases in these shooting errors are important, but they are not unconscious biases.
Contrary to Gawronski et al., I do not believe that social cognition researchers’ focus on automatic biases rather than unconscious biases was a mistake. The real mistake was the focus on reaction times in artificial computer tasks rather than studying racial biases in the real world. As a result, thirty years of research on automatic biases has produced little insight into racial biases in the real world. To move the field towards the study of unconscious biases would be a mistake. Instead, social cognition researchers need to focus on outcome variables that matter.
Conclusion
The term implicit bias can have different meanings. Gawronski et al. (2022a) proposed to limit the meaning of the term to unconscious bias. I argue that this definition of implicit bias is not useful because most studies of implicit cognition are studies in which racial stereotypes and attitudes toward stigmatized groups are automatically activated. In contrast, priming studies that tried to distinguish between conscious and unconscious activation of this information have been discredited during the replication crisis and there exists no credible empirical evidence to suggest that unconscious biases exist or contribute to real-world behavior. Thus, funding a new research agenda focusing on unconscious biases may waste resources that are better spent on real-world studies of racial biases. Evidently, this conclusion diverges from the conclusion of implicit cognition researchers who are interested in continuing their laboratory studies, but they have failed to demonstrate that their work makes a meaningful contribution to society. To make research on automatic biases more meaningful, implicit bias research needs to move from artificial outcomes like reaction times on computer tasks to actual behaviors.
Appendix 1
Implicit Cognition Research Focusses on Automatic (Not Unconscious) Processes
Gawronski & Bodenhausen (2006), WOS/11/22 1,537
“If eras of psychological research can be characterized in terms of general ideas, a major theme of the current era is probably the notion of automaticity” (p. 692)
This perspective is also dominant in contemporary research on attitudes, in which deliberate, “explicit” attitudes are often contrasted with automatic, “implicit” attitudes (Greenwald & Banaji, 1995; Petty, Fazio, & Briñol, in press; Wilson, Lindsey, & Schooler, 2000; Wittenbrink & Schwarz, in press).
“We assume that people generally do have some degree of conscious access to their automatic affective reactions and that they tend to rely on these affective reactions in making evaluative judgments (Gawronski, Hofmann, & Wilbur, in press; Schimmack & Crites, 2005) (p. 696).
“The distinction between automatic and controlled processes now occupies a central role in many areas of social psychology and is reflected in contemporary dual-process theories of prejudice and stereotyping (e.g., Devine, 1989)” (p. 469)
“Specifically, we argued that performance on implicit measures is influenced by at least four different processes: the automatic activation of an association (association activation), the ability to determine a correct response (discriminability), the success at overcoming automatically activated associations (overcoming bias), and the influence of response biases that may influence responses in the absence of other available guides to response (guessing)” (p. 482)
Gawronski & DeHouwer (2014), WOS 11/22 240
“Other researchers assume that the two kinds of measures tap into distinct memory representations, such that explicit measures tap into conscious representations whereas implicit measures tap into unconscious representations (e.g., Greenwald & Banaji, 1995). Although these conceptualizations are relatively common in the literature on implicit measures, we believe that it is conceptually more appropriate to classify different measures in terms of whether the to-be-measured psychological attribute influences participants’ responses on the task in an automatic fashion (De Houwer, Teige-Mocigemba, Spruyt, & Moors, 2009).” (p. 283)
Hofmann, Gawronski, Le, & Schmitt, PSPB, 2005, WoS/11/22
“These [implicit] measures—most of them based on reaction times in response compatibility tasks (cf. De Houwer, 2003)—are intended to assess relatively automatic mental associations that are difficult to gauge with explicit self-report measures”. (p. 1369)
“A common explanation for these findings is that the spontaneous behavior assessed in these studies is difficult to control, and thus more likely to be influenced by automatic evaluations, such as they are reflected in indirect attitude measures” (p. 492)
“there is no empirical evidence that people lack conscious awareness of indirectly assessed attitudes per se” (p. 496)
“The central assumption in this model is that indirect measures provide a proxy for the activation of associations in memory” (p. 187)
Gawronski & LeBel, JESP (2008) WOS/11/22
“We argue that implicit measures provide a proxy for automatic associations in memory, which may or may not influence verbal judgments reflected in self-report measures” (p. 1356)
“Phenomena such as stereotype and attitude activation can be readily reconstructed as instance-based automaticity. For example, perceiving a person of a stereotyped group or an attitude object may be sufficient to activate well-practiced stereotypic or evaluative associations in memory” (p. 386)
Implicit measures are important even if they do not assess unconscious processes.
Hofmann, Gawronski, Le, & Schmitt, PSPB, 2005, WoS/11/22
” Arguably one of the most important contributions in social cognition research within the last decade was the development of implicit measures of attitudes, stereotypes, self-concept, and self-esteem (e.g., Fazio, Jackson, Dunton, & Williams, 1995; Greenwald, McGhee, & Schwartz, 1998; Nosek & Banaji, 2001; Wittenbrink, Judd, & Park, 1997).” (p. 1369)
Gawronski & DeHouwer (2014), WOS 11/22 240
“For the decade to come, we believe that the field would benefit from a stronger focus on underlying mechanisms with regard to the measures themselves as well as their capability to predict behavior (see also Nosek, Hawkins, & Frazier, 2011).” (p. 303)
Post-war American Psychology is rooted in behaviorism. The key assumption of behaviorism is that psychology (i.e., the science of the mind) should only study phenomena that are directly observable. As a result, the science of the mind became the science of behavior. While behaviorism is long dead (see the 1990 funeral here), its (harmful) effect on psychology is still noticeable today. One lasting effect is psychologists’ aversion to making causal attributions to the mind (cognitive processes). While cognitive processes cannot be directly observed with the human senses (we cannot see, touch, smell, or hear what goes on in somebody’s mind), we can indirectly observe these processes on the basis of observable behaviors. A whole different discipline that is called psychometrics has developed elaborate theories and statistical models to relate observed behaviors to unobserved processes in the mind. Unfortunately, psychometrics is often not covered in the education of psychologists. As a result, psychologists often make simple mistakes when they apply psychometric tools to psychological questions.
In the language of psychometrics, observed behaviors are observed variables and unobserved mental processes are unobserved variables, which are also often called latent variables (latent: “(of a quality or state) existing but not yet developed or manifest; hidden or concealed”). The goal of psychometrics is to find systematic relationships between observed and latent variables that make it possible to study mental processes. We can compare this process to the task of early astronomers to make sense of the lights in the night sky. Bright stars are like observable indicators and the task of astronomers is to explain the behavior of these observable variables with unobserved forces. Astronomy has come a long way from seeing astrological signs in the sky, but psychology is pretty much at this early stage of science, where most of the cognitive processes that cause observable behaviors are unknown. In fact, some psychologists still resist the idea that observable behavior can be explained by latent variables (Borsboom et al., 2021). Others, however, have used psychometric tools but failed to understand the basic properties of psychometric models (e.g., Digman, 1997; DeYoung & Peterson, 2002; Musek, 2007). Here, I give a simple introduction to the basic logic of psychometric models and illustrate how applied psychologists can get lost in latent variable space.
Figure 2 shows the most basic psychometric model that relates an observed variable to an unobserved cause. I am using a widely used measure of life-satisfaction as an example. Please rate your life on a scale from 0 = worst possible life to 10 = best possible life. Thousands of studies with millions of respondents have used this question to study “the secret of happiness.” Behaviorists would treat this item as a stimulus and participants’ responses on the 11-point rating scales as behaviors. One problem for behaviorists is that participants will respond differently to the same question. Responses vary from 0 (very rarely) all the way to 10 (more often, but still rare). The modal response in affluent Western countries is 7. Behaviorism has no answer to the question why participants respond differently to the same situation (i.e., question). Some researchers have tried to provide a behavioristic answer by demonstrating that responses can be manipulated (e.g., responses are different in a good or bad mood; Schwarz & Strack, 1999; Kahneman, 2011). However, these effects are small and do not explain why responses are highly stable over time and across different situations (Schimmack & Oishi, 2005). To explain why some people report higher levels of life-satisfaction than others, we have to invoke unobserved causes within respondents’ minds. Just like the forces that create the universe, these causes are not directly observable, but we know they must exist because we observe variation in responses that cannot be explained by variation in the situation (i.e., same situation and different behaviors imply internal causes).
Psychologists have tried to understand the mental processes that produce variation in Cantril ladder scores for nearly 100 years (Andrews & Withey, 1976; Cantril, 1965; Diener, 1984; Hartmann, 1936). In the 1980s, focus shifted from thoughts about one’s life (e.g., I hate my work, I love my spouse, etc.) to the influence of personality traits (Costa & McCrae, 1980). Just like life-satisfaction, personality is a latent variable that can only be measured indirectly by observing differences in behaviors in the same situation. The most widely used observed variables to do so are self-ratings of personality.
The key problem for the measurement of unobserved mental processes is that variation in observed scores can be caused by many different mental processes. To go beyond the level of observed variation in behaviors, it is necessary to separate the different causes that contribute to the variance in observed scores. The first step is to separate causes that produce measurement error. The most widely used approach to do so is to ask the same or similar questions repeatedly and to consider variability in responses as measurement error. The next figure shows a model for responses to two similar items.
When two or more observed variables are available, it is possible to examine the correlation between two variables. If two observed variables share a common cause, they are going to be correlated. The strength of the correlation depends on the relative strength of the shared mental process and the unique mental processes. Psychometrics works in reverse and makes inferences about the unobserved causes by examining the observed correlations. To do so, it is necessary to make some assumptions, and this is where things can go wrong, when researchers do not understand these assumptions.
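A small simulation may make this concrete (my own sketch; the loading of .7 is arbitrary). Two items are each generated as a mix of one shared cause and one unique cause; their observed correlation reflects only the shared part and equals the product of the standardized loadings.

```python
# Two indicators of one shared cause: the correlation equals loading squared.
import numpy as np

rng = np.random.default_rng(4)
n, loading = 100_000, 0.7

shared = rng.standard_normal(n)                      # shared mental process
item1 = loading * shared + np.sqrt(1 - loading**2) * rng.standard_normal(n)
item2 = loading * shared + np.sqrt(1 - loading**2) * rng.standard_normal(n)

r = np.corrcoef(item1, item2)[0, 1]
print(f"observed correlation: {r:.2f} (expected: {loading**2:.2f})")
```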
A common assumption is that the shared causal processes are important and meaningful, whereas the unique mental processes are unimportant, irrelevant, and error variance. Based on this assumption, the model is often drawn differently. Sometimes, the shared unobserved variable is drawn on top, and the unshared unobserved variables are drawn at the bottom (top = important, bottom = unimportant).
Sometimes, the unique mental processes are drawn smaller and without a name.
And sometimes, they are simply omitted because they are considered unimportant and irrelevant.
The omission of the unshared causes makes sense when psychometricians communicate with each other because they are trained in understanding psychometric models and use figures merely as a short-hand to communicate with each other. However, when psychometricians communicate with psychologists things can go horribly wrong because psychologists may not realize that the omission of residuals is based on assumptions that can be right or wrong. They may simply assume that the unique variances are never important and can always be omitted. However, this is a big mistake with undesirable consequences. To demonstrate this, I am always going to show the unique causes of all variables in the following models.
When psychologists ask similar questions repeatedly, they are assuming that the unique causes of the responses are measurement error. In the present example, individuals may interpret the words “worry” and “nervous” somewhat differently and this may elicit different mental processes that result in slightly different responses. However, the two terms are sufficiently similar that they also elicit similar cognitive processes that produce a correlation between responses to the two items. Under this assumption, the common causes reflect the causes that are of interest and the unique causes produce error variance. Under the assumption that unique causes produce error variance, it is possible to average responses to similar items. These averages are called scales. Averaging amplifies the variance that is produced by shared causes.
This is illustrated in the next figure where the average is fully determined by the two observed variables “I often worry” and “I am often nervous.” To make this a measurement model, we have to relate the average scores to the unobserved variables. Now we see that the shared mental process variable has two ways to influence the average scores, whereas each of the unique causes has only one way to contribute to the average. As the number of variables increases, the ratio (2:1) becomes even bigger for the shared variable (3 variables, 3:1). This implies that the shared mental processes more and more determine the average scores. This is the only part of measurement theory that psychologists are taught and understand, as reflected in the common practice to report Cronbach’s alpha (a measure of the shared variance in the average score) as evidence that a measure is a good measure (Flake & Fried, 2020). However, the real measurement problems are not addressed by averaging across similarly-worded items. This is revealed in the next figure.
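Before moving on, the point about averaging can be checked with a quick simulation (again my own sketch with arbitrary loadings): as items are added, a larger share of the variance of the scale comes from the shared cause, and for interchangeable items like these Cronbach’s alpha tracks exactly that share.

```python
# Averaging parallel items increases the proportion of scale variance that
# comes from the shared cause; Cronbach's alpha estimates that proportion here.
import numpy as np

rng = np.random.default_rng(5)
n, loading = 100_000, 0.7
shared = rng.standard_normal(n)

def make_item():
    return loading * shared + np.sqrt(1 - loading**2) * rng.standard_normal(n)

def cronbach_alpha(items):
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

for k in (2, 3, 6):
    items = np.column_stack([make_item() for _ in range(k)])
    scale = items.mean(axis=1)
    shared_share = np.corrcoef(scale, shared)[0, 1] ** 2
    print(f"{k} items: shared variance = {shared_share:.2f}, "
          f"alpha = {cronbach_alpha(items):.2f}")
```

Note what the simulation does not do: it says nothing about what the shared cause actually is, which is the real measurement problem discussed next.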
To use the average of responses to similarly worded items as an observed measure of an unobserved personality trait, we have to assume that the shared mental processes that produce most of the variance in the average scores are caused by the personality trait that we are trying to measure. In the present example, personality psychologists use items like “worry” and “nervous” to measure a trait called Neuroticism. Despite 100 years of research, it is still not clear what Neuroticism is and some psychologists still doubt that Neuroticism even exists. Those who do believe in Neuroticism assume it reflects a general disposition to have more negative thoughts (e.g., low self-esteem, pessimism) and feelings (anxiety, anger, sadness, guilt). The main problem in current personality research is that item-averages are often treated as if they are perfect observed indicators of an unobserved personality trait (see next figure).
Ample research suggests that average scores of neuroticism items are also influenced by other factors such as socially desirable responding. Thus, it is a simplification to assume that item-averages are identical or isomorphous to the personality trait that they are designed to measure. Nevertheless, it is common for personality psychologists to study the influence of unobserved causes like Neuroticism by means of item averages. As we see later, even when psychologists use latent variable models, Neuroticism is just a label for an item-average. The problem with this practice is that it gives the illusion that we can study the causal effects of unobservable personality traits by examining the correlations of observable item-averages.
In this way, measurement problems are treated as unimportant, just like behaviorists considered mental processes as unimportant and relegated them to a black box that should not be examined. The same attitude prevails today with regards to personality measurement, when boxes (observed variables) are given names without checking that the labels actually match the content of the box (i.e., the unobserved causes that a measure is supposed to reflect). Often psychological constructs are merely labels for item-averages. Accordingly, neuroticism is ‘operationalized’ with an item-average and neuroticism can be defined as “whatever a neuroticism scale measures.”
When Things Go from Bad to Worse
In the 1980s, personality psychologists came to a broad consensus that the diversity of human traits (e.g., anxious, bold, curious, determined, energetic, frank, gentle, helpful, etc.) can be organized into a taxonomy with five broad traits, known as the Big Five. The basic idea is illustrated in the next figure with Neuroticism. According to Big Five theory, Neuroticism is a general disposition to experience more anxiety, anger, and sadness. However, each emotion also has its own disposition. Thus, variation in scales that measure anxiety, anger, and sadness is influenced by both Neuroticism (i.e., the general disposition) and specific causes. In addition, scales can also be influenced by general and specific measurement errors. The figure makes it clear that the scores in the item-averages can reflect many different causes aside from the intended broader personality trait called Neuroticism. This makes it risky to rely on these item averages to draw inferences about the unobserved variable Neuroticism.
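To see why this is risky, consider a quick sketch (illustrative weights, not estimates from any data set): an anxiety scale that mixes the broad Neuroticism disposition with an anxiety-specific disposition and measurement error correlates far from perfectly with Neuroticism itself, and the correlations among facet scales reflect only the shared trait.

```python
# Facet scales as mixtures of a general trait, a specific disposition, and error.
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
neuroticism = rng.standard_normal(n)

def facet_scale(general=0.6, specific=0.6, error=0.53):
    # weights chosen so that the scale has roughly unit variance
    return (general * neuroticism
            + specific * rng.standard_normal(n)   # facet-specific disposition
            + error * rng.standard_normal(n))     # measurement error

anxiety, anger, sadness = facet_scale(), facet_scale(), facet_scale()
print(f"anxiety scale with Neuroticism: {np.corrcoef(anxiety, neuroticism)[0, 1]:.2f}")
print(f"anxiety scale with anger scale: {np.corrcoef(anxiety, anger)[0, 1]:.2f}")
```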
A true science of personality would try to separate these different causes and to examine how they relate to other variables. However, personality psychologists often hide the complexity of personality measurement by treating personality scales as if they directly reflect a single cause. While this is bad enough, things get even worse when personality psychologists speculate about even broader personality traits.
The General Personality Factor (Musek, 2007)
The Big Five were considered to be roughly independent of each other. In fact, they were found with a method that looks for independent factors (factor being the name for unobserved variables that is more commonly used in personality research). However, when Digman (1997) examined correlations among item-averages, he found some systematic patterns in these correlations. This led him to postulate even broader factors than the Big Five that might explain these patterns. The problem with these theories is that they are no longer trying to relate observed variables to unobserved variables. Rather, Digman started to speculate about causal relationships among unobserved variables on the basis of imperfect indicators of the Big Five.
The first problem with Digman's attempt to explain correlations among unobserved variables was that he lacked expertise in the use of psychometric models. As a result, he made some mistakes and his results could not be replicated (Anusic et al., 2009). A few years later, a study that controlled for some of the measurement problems by using self-ratings and informant ratings suggested that the Big Five are more or less independent and that correlations reflect measurement error (Biesanz & West, 2004; see also Anusic et al., 2009). However, other studies suggested that higher-order factors exist and may have powerful effects on people's lives, including their well-being. In the following, I am going to show that these claims are based on a simple misunderstanding of measurement models that treat unique variance in the Big Five scales as error variance.
Musek (2007) proposed that correlations among Big Five scales can be explained with a single higher-order factor. This model is illustrated in his Figure 1.
First, it is notable that the unique mental processes that contribute to each of the Big Five scales are called e1 to e5, and the legend of the figure explains that e stands for error variances. This terminology can be justified if we treat Big Five scales only as observed variables that help us to observe the unobserved variable GFP. As the GFP is not directly observable, we have to infer its presence from the correlations among the observed variables, namely the Big Five scales. However, labeling the unique causes that produce variation in Neuroticism scores as error variance is dangerous because we may think that the unique variance in Neuroticism is no longer important; just error. Of course, this variance is not error variance in some absolute sense. After all, Neuroticism scales exist only because personality psychologists assume that Neuroticism is a real personality trait that is related to even more specific traits like anxiety, anger, and sadness. Thus, all of the variance in a Neuroticism scale is assumed to be important, and it would be wrong to assume that only the variance shared with other Big Five scales is important. To avoid this misinterpretation, it would be better to keep the unique causes in the model.
Another problem with this model is that the model itself provides no information about the actual causes of the correlations among the Big Five scales. This is different when items are written for the explicit purpose of measuring something that they have in common. In contrast, the correlations among the Big Five traits are an empirical phenomenon that requires further investigation to understand the nature of the causal processes that produce the correlations. In other words, GFP is just a name for "shared cognitive processes;" it does not tell us what these shared cognitive processes are. To examine this question, it is necessary to see how the GFP is related to other things. This is where things go horribly wrong. Rather than relating the unobserved variable in Figure 1 to other measures, Musek (2007) averages all Big Five items to create an item average that is supposed to represent the unobserved variable. He then uses correlations of the GFP scale to make inferences about the GFP factor. The problems of this approach are illustrated with the next figure.
The figure illustrates that the general personality scale is not a good indicator of the general personality factor. The main problem is that the scale scores are also influenced by the unique causes that contribute to variation in the Big Five scales (on top of measurement error, which is not shown in the picture to avoid clutter but should not be forgotten). The problem is hidden when the unique causes are represented as errors, but unique variance in Neuroticism is not error variance. It reflects a disposition to have more negative thoughts, and this disposition could have a negative influence on life-satisfaction. This contribution of unique causes is hidden when Big Five scale scores are averaged and labeled General Personality.
Musek (2007) reports a correlation of r = .5 (Study 1) between the general personality scale and a life-satisfaction scale. Musek claims that this high correlation must reveal a true relationship between the general factor of personality and life-satisfaction and cannot reflect a method artifact like socially desirable responding. It is unclear why Musek (2007) relied on an average of Big Five scale scores to examine the relationship of the general factor with life-satisfaction. Latent variable modeling makes it possible to examine the relationship of the general factor directly without the need for scale scores. Fortunately, it is possible to conduct this analysis post-hoc based on the reported correlations in Table 1.
The first model created a general personality scale and used the scale as a predictor of life-satisfaction. The only difference from a simple correlation is that the model also includes the implied measurement model. This makes the model testable because it imposes restrictions on the correlations of the Big Five scales with the life-satisfaction scale. The fit of the model was acceptable, but not great, suggesting that alternative models might produce even better fit, RMSEA = .078, CFI = .958.
In this model, it is possible to trace the paths from the unobserved variables to life-satisfaction. The strongest relationship was the path from the general personality factor (h) to life-satisfaction, b = .42, se = .04, but the model also implied that unique variances of the Big Five scales contribute to life-satisfaction. These effects are hidden when the general personality scale is interpreted as if it is a pure measure of the general personality factor.
A direct test of the assumption that the general factor is the only predictor of life-satisfaction requires a simple modification of the model that links life-satisfaction directly to the general factor (h). This model actually fits the data better, RMSEA = .048, CFI = .984. This might suggest that the unique causes of variation in the Big Five are unrelated to life-satisfaction.
However, good fit is not sufficient to accept a model. It is also important to rule out plausible alternative models. An alternative model assumes that the Big Five factors are necessary and sufficient to explain variation in life-satisfaction. There is no reason to create a general scale and use it as a predictor. Instead, life-satisfaction can simply be regressed onto the Big Five scales as indicators of the Big Five factors. In fact, it is always possible to get good fit for a model that uses indicators as predictors of outcomes because the model does not impose any restrictions (i.e., the model is just identified). The only reason why this model fits worse than the other model is that fit indices like RMSEA and CFI reward parsimony, and this model uses five predictors of life-satisfaction whereas the previous model had only one. However, parsimony cannot be used to falsify a model.
In fact, it is possible to find an even better fitting model because only two of the five Big Five scales were significant predictors of life-satisfaction. This finding is consistent with many previous studies showing that these two Big Five traits, neuroticism and extraversion, are the strongest predictors of life-satisfaction. If the model is limited to these two predictors, it fits the data better than the model with a direct path from the general factor, CFI = .987, RMSEA = .045. Musek (2007) was unable to realize that the unique variances in neuroticism and extraversion make a unique contribution to life-satisfaction because the general personality scale does not separate shared and unique causes of variation in the Big Five scales.
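Readers who want to try this kind of model comparison themselves do not need MPLUS; any SEM package will do. Below is a minimal sketch using Python's semopy package with lavaan-style model syntax. The correlation matrix is generated from made-up loadings purely to make the script runnable; it is not Musek's Table 1, and the variable names are my own shorthand, so the output will not reproduce the numbers reported above.

```python
# pip install semopy
import numpy as np
import pandas as pd
import semopy

# Build a placeholder correlation matrix from made-up loadings (NOT Musek's Table 1).
cols = ["N", "E", "O", "A", "C", "LS"]
lam = np.array([-0.5, 0.6, 0.2, 0.4, 0.5, 0.6])  # illustrative values only
R = np.outer(lam, lam)
np.fill_diagonal(R, 1.0)
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.multivariate_normal(np.zeros(6), R, size=600), columns=cols)

# Model A: a general factor (h) is the only predictor of life-satisfaction.
model_a = semopy.Model("""
h =~ N + E + O + A + C
LS ~ h
""")
model_a.fit(df)
print(semopy.calc_stats(model_a))  # fit statistics, including CFI and RMSEA

# Model B: life-satisfaction regressed directly on the N and E scales,
# so that their unique variances can contribute to the prediction.
model_b = semopy.Model("""
h =~ N + E + O + A + C
LS ~ N + E
""")
model_b.fit(df)
print(semopy.calc_stats(model_b))
```

With the actual Table 1 correlations in place of the simulated data, comparing fit indices across such alternative specifications is all that is needed to repeat the comparison described above.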
The Correlated Big Two
In contrast to Musek (2007), DeYoung and Peterson favor a model with two correlated higher-order factors (DeYoung, Peterson, & Higgins, 2002; see Schimmack, 2022, for a detailed discussion).
Like Musek (2007), they treat the unique causes of variation in the Big Five traits as error (e1-e5) and assume that relationships of the higher-order factors with criterion variables are direct rather than being mediated by the Big Five factors. Here, I fitted this model to Musek's (2007) data. Fit was excellent, CFI = .996, RMSEA = .030.
Based on this model, life-satisfaction would be mostly predicted by stability rather than neuroticism and extraversion or a general factor. However, just because this model has excellent fit doesn’t mean it is the best model. The model simply masks the presence of a general factor by modeling the shared variance between Plasticity and Stability as a correlated residual. It is also possible to model it with a general factor. In this model, Stability and Plasticity would be an additional level in a hierarchy between the Big Five and the General Factor. This model does not impose any additional restrictions and fits the data as well as the previous model, CFI = .996, RMSEA = .030. Thus, even though Stability and Plasticity can be identified, it does not mean that this distinction is important for the prediction of life-satisfaction. The general factor could still be the key predictor of life-satisfaction.
However, both models assume that the unique causes of variation in Big Five scales are unrelated to life-satisfaction, and we already saw that this assumption is false. Accordingly, a model that relates life-satisfaction to neuroticism and extraversion also fits the data, CFI = .994, RMSEA = .035, and the paths from extraversion and neuroticism to life-satisfaction were significant.
Musek (2007) and DeYoung et al. (2006) ignored the possibility that unique causes of variation in the Big Five contribute to the prediction of other variables because they made the mistake of equating unique variances with error variances. This interpretation is based on the basic examples that are used to introduce latent variable models to beginners. However, the interpretation of all aspects of a latent variable model, including the residual or unique variances, has to be guided by theory. To avoid these mistakes, psychometricians need to stop presenting their models as if they can be used without substantive theory, and substantive researchers need to get better training in the use of psychometric tools.
Conclusion
Compared to other sciences like physics, astronomy, chemistry, or biology, psychology has made little progress over the past 40 years. While there are many reasons for this lack of progress, one problem is the legacy of behaviorism to focus on observable behaviors and to rely on experimentation as the only scientific approach to test causal theories. Another problem is an ideological bias against personality as a causal force that produces variation between individuals (Mischel, 1968). To make progress, personality science has to adopt a new scientific approach that uses observed behaviors to test causal theories of unobservable forces like personality. While personality scales can be used to predict behaviors and life-outcomes, they cannot explain behaviors and life-outcomes. Latent variable modeling provides a powerful tool to test causal theories. The biggest advantage of latent variable modeling is that model fit can be used to reject models. A cynic might think that this is the main reason why psychologists do not use it more: it is more fun to build a theory and confirm it than to find out that it was false, but fun doesn't equal scientific progress.
P.S. What about Network Models?
Of course, it is also possible to reject the idea of unobserved variables altogether and draw pictures of the correlations (or partial correlations) among all the observed variables. The advantage of this approach is that it always produces results that can be used to tell an interesting story about the data. The disadvantage is that it always produces a result and therefore doesn’t test any theory. Thus, these models cannot be used to advance personality psychology towards a science that progresses by testing and rejecting false theories.
Awards, Ivy League universities, or prestigious journals are suboptimal heuristics to evaluate people's work, but in a world of information overflow, they influence the popularity of ideas. Therefore, I am cashing in on Jason Geller's invitation to present z-curve in the Advanced Research Methods seminar at Princeton.
The talk was recorded and Jason and Princeton University generously shared the recording with me (Video). The talk builds on previous talks, but incorporates the latest z-curve findings that demonstrate the power of z-curve to predict replication failures and to justify the use of alpha = .005 as a reasonable criterion for significance tests to keep the risk of false positive results in psychological journals at a reasonably low level.
You can find many other z-curve related articles and studies on my blog. Here I want to mention only the two peer-reviewed articles that introduced the method and provide more detailed information about the method.
Recently, a team of German sociologists combined data about racial biases in police stops in the United States (Stanford Open Policing Project; Pierson et al., 2020) and data about county-level average levels of racial biases collected by Project Implicit (Xu et al., 2022). The key finding was that various measures of racial bias were correlated with racial bias in traffic stops by police (published in the Supplement Table 2).
The authors missed an opportunity to examine the validity of different measures of racial attitudes under the assumption that all measures, implicit and explicit, reflect a common attitude rather than distinct attitudes (Schimmack, 2021). If implicit measures tapped some distinct form of unconscious bias, they should show incremental predictive validity. To examine this question, I used the correlations in Table 2 and fitted a structural equation model to the data. I found that a model with a single racial bias factor fitted the data reasonably well, chi2 (df = 9, N ~ 300) = 34.52, CFI = .975, RMSEA = .097. The effect size of b = .369 for bias implies that for every one standard deviation increase in bias, racial bias in traffic stops increases by .369 standard deviations. This is considered a moderate effect size in comparison to other effect sizes in the social sciences.
The more interesting result is that the race IAT and simple self-report measures of racial bias are equally valid measures of counties' average level of racial bias. The effect sizes are .797 for the feeling thermometer, .784 for a simple preference rating, and .834 for the race Implicit Association Test, a computerized task that is less susceptible to socially desirable responding. The high validity coefficients of these measures can be explained by the aggregation of individuals' scores. Aggregation reduces random measurement error as well as systematic biases that are unique to individuals. Thus, the present results show that race IAT scores are valid measures of racial biases at the aggregated level. The results also show that self-ratings provide as much valid information. This undermines claims by Greenwald, who developed the IAT, that the race IAT is a more valid measure of racial biases than self-ratings (see also Schimmack, 2021, for studies at the individual level).
The figure also shows an additional relationship between the race IAT and the weapons IAT. This relationship reveals that IAT tasks reflect some information that is not captured by self-reports. However, it is not clear whether this variance is method variance or valid variance of unconscious bias. In the latter case, the unique variance in the race IAT could predict police stops in addition to the bias factor (incremental predictive validity).
Adding this path did not improve model fit and the effect size estimate was not significantly different from zero, b = -.045, 95%CI = -.305 to .214. These results are consistent with many other results that the incremental predictive validity of the race IAT is elusive and even if it is not zero, it is likely to be negligible (Kurdi et al., 2019).
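For what it is worth, this incremental-validity check is easy to repeat with any SEM software. The sketch below uses semopy on simulated placeholder data (the loadings and sample size are made up; the published Table 2 correlations would have to be substituted to reproduce the reported estimates). It probes incremental validity by adding a direct path from the race IAT to traffic stops on top of the bias factor, which is one standard way to test whether the IAT's unique variance adds anything; I am assuming here, as in lavaan, that an observed variable can serve both as an indicator and as a predictor.

```python
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(2)
n = 300  # placeholder, roughly the number of counties
bias = rng.normal(size=n)  # latent county-level racial bias

def indicator(weight):
    # one noisy observed measure of the latent bias
    return weight * bias + rng.normal(scale=np.sqrt(1 - weight**2), size=n)

df = pd.DataFrame({
    "thermo": indicator(0.8),    # feeling thermometer
    "prefer": indicator(0.8),    # explicit preference rating
    "race_iat": indicator(0.8),  # race IAT
    "stops": 0.4 * bias + rng.normal(scale=0.9, size=n),  # bias in traffic stops
})

# Single bias factor predicting traffic-stop bias, plus a direct path from the
# race IAT that captures whatever its unique variance adds beyond the factor.
model = semopy.Model("""
bias =~ thermo + prefer + race_iat
stops ~ bias + race_iat
""")
model.fit(df)
print(model.inspect())  # look for the stops ~ race_iat row: estimate, SE, p-value
```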
In short, the article could have made a nice contribution to the literature by demonstrating that implicit and explicit measures of racial bias show high convergent validity when they are aggregated to measure racial bias of US counties, and by demonstrating that racial bias predicts an important behavior, namely police officers’ decision to conduct a traffic stop.
However, the discussion of the results in the article is problematic and may reveal a sociological bias or the lack of lived experience of German researchers. The authors interpret the results as evidence that situational factors explain the results.
“The observed relationships between regional-level bias and police traffic stops underscore the role of the context in which police officers operate. Our findings are consistent with theorizing by Payne et al. (2017), who argued that some contexts expose individuals more regularly to stereotypes and/or prejudice, increasing mental accessibility of biased thoughts and feelings, in turn influencing individual behavior. Consequently, behavioral expressions of prejudice and stereotypes often reflect properties of contexts rather than stable dispositions of people (but see Connor & Evers, 2020).”
The plausible alternative explanation is relegated to a "but see." As a German who has lived in the United States and is constantly exposed to US media while living in Canada, I think the "but" deserves more attention and is actually a more plausible explanation of these findings. After all, police officers are not Robo-Cops or United Nations soldiers. They are typically born and raised in the county they are working in, or in close proximity to it (Flint Town). As a result, their own racial biases are likely to be similar to the racial biases measured in the Project Implicit data (see Andersen et al., 2021, for race IAT scores of police officers). Thus, it is entirely possible that racial biases of police officers, rather than some mysterious unidentified social context, contribute to the racial biases in police stops. This does not mean that social factors are not at play. The fact that racial bias is not some involuntary, unconscious bias means that better training and incentives can be used to reduce bias in police officers' behaviors without changing their attitudes and feelings. Traffic stops are clearly deliberate actions that are not made in a split second. Thus, officers can be trained to avoid biases in their actions without the need to change their implicit or explicit attitudes. Although attitude change would be desirable, it is difficult and will take time. For now, Black citizens are likely to settle for equal treatment rather than waiting for changes in implicit attitudes that are difficult to measure and have no known effects on behavior.
In conclusion, it is well known that racism is a problem among US police officers. Often these officers are known and remain on the force. This study shows that these racial attitudes have clear consequences that sometimes lead to the death of innocent Black civilians. To attribute these incidents to some abstract contextual factors ignores the lived experiences of thousands of African Americans. The data are fully consistent with the common assumption of African Americans that racist cops are more likely to pull them over. The present study showed that this fear is more justified in counties with higher levels of racism.
Lew Goldberg has made important contributions to personality psychology. He contributed to the development of the Big Five model that is currently the most widely accepted model of the higher-order factors of personality that describe the relationship among the basic trait words used in everyday language.
He also pioneered open science when he made a large pool of personality items available to all researchers and created open and free measures that mimic proprietary measures like Costa and McCrae's NEO scales. Because these measures were designed to match the original scales as closely as possible, the validity of the scales is defined in terms of correlations with the existing scales. The goal of the IPIP project was not to examine validity or to improve on existing measures. As Lew pointed out in personal correspondence to me, users of IPIP measures could have created new measures based on the initial 300 items. The fact that users of these items have failed to do so shows a lack of interest in construct validation. Thanks to Lew Goldberg, we have open items and open data to develop better measures of personality.
The extended 300-item IPIP measure has been used to provide thousands of internet users free feedback about their personality, and Johnson made his data from these surveys openly available (OSF-data).
The present critical examination of the psychometric properties of the IPIP scales would not be possible without these contributions. My main criticism of personality measurement is that personality psychologists have not used the statistical tools that are needed to validate a personality measure. A common and false belief among personality psychologists is that these tools are not suitable for personality measures. A misleading article by McCrae, Costa, and colleagues in the esteemed Journal of Personality and Social Psychology did not help. The authors were unable to fit a Big Five model to their data. Rather than questioning the model, they decided that the method is wrong because "we know that the Big Five model is right". This absurd conclusion has been ridiculed by psychometricians (Borsboom, 2006), but led only to a defensive response by personality psychologists (Clark, 2006). For the most part, personality psychologists today continue to create scales or use scales that lack proper validation. The IPIP-300 is no exception. This blog post just illustrates with a simple example how bad measurement can derail science.
The IPIP-300 aims to measure 30 personality traits that are called facets. Facets are constructs that are more specific than the Big Five and closer to everyday trait concepts. Each facet is measured with 10 items. The 10 items are summed or averaged to give individuals a score for one of the 30 facets. Each facet has a name. There are two ways to interpret these names. One interpretation is that the name is just a short-hand for a scientific construct. For example, the term Depression is just a name for the sum-scores of 10 items from the IPIP. To know what this sum score actually measures, one might need to examine the item content, learn about the correlations of this sum-score with other sum-scores, or understand the scientific theory that led to the creation of the 10 items. Accordingly, the Depression scale measures whatever it is supposed to measure, and whatever that is, it is called Depression. In this case, we could change the name of the scale without changing anything in our understanding of the scale. We could call it the D-scale or just facet number 3 of Neuroticism. Depression is just a name. The alternative view assumes that the 10 items were selected to measure a construct that is at least somewhat related to what we mean by depression in our everyday language. For example, we would be surprised to see the item "I like ginger" or "I often break the rules" in a list of items that are supposed to measure depression. The use of everyday trait words as labels for scales usually implies that researchers are aiming to measure a construct that is at least similar to the everyday meaning of the label. Unfortunately, this is often not the case, and interpreting scales based on their labels can lead to misunderstandings.
To illustrate the problem of misch-mesch-urement, I am using two facet scales from the IPIP-300 that are labeled Depression and Modesty. I used the first 10,000 observations in Johnson's dataset and selected only US respondents with complete data (N = 6,786). The correlation between Depression and Modesty was r = .35, SE = .01. I replicated this finding with the next 10,000 observations, again selecting only US respondents with complete data (N = 5,864), r = .39, SE = .01. The results clearly show a moderate positive relationship between the two scale scores. A correlation of r = .35 implies that a respondent who is above average in Depression has about a 67.5% probability of also being above average in Modesty. We could now start speculating about the causal mechanism that produces this correlation. Maybe bragging (not being modest) reduces the risk of depression. Maybe being depressed lowers the probability of bragging. Maybe it is both, and maybe there are third variables at play. However, before we even start down this path, we have to consider the possibility that the sum score labels are misleading and we are not even seeing the correlation between the constructs that we have in mind when we talk about depression and modesty. This question is examined by fitting a measurement model to the items that were used to create the sum scores.
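A quick note on where the 67.5% comes from: it matches the common binomial-effect-size-display convention (.50 + r/2). A short check, with the exact bivariate-normal value added for comparison (my addition, not part of the original analysis), shows that the exact figure is a bit lower, although the substantive point is the same.

```python
from scipy.stats import multivariate_normal

r = 0.35

# Binomial effect size display: .50 + r/2
print(f"BESD: {0.5 + r / 2:.3f}")  # 0.675

# Exact value under bivariate normality:
# P(above the median on both) / P(above the median on Depression)
p_both = multivariate_normal(mean=[0, 0], cov=[[1, r], [r, 1]]).cdf([0, 0])
print(f"bivariate-normal value: {p_both / 0.5:.3f}")  # about 0.61
```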
Of course, the two scales were chosen because a simple measurement model does not fit the data. This is shown with a simplified figure of a measurement model that assumes that the 10 items of a scale all reflect a common construct and some random measurement error. The items are summed to reduce the random measurement error so that the sum score mostly reflects the common construct. The main finding is that this simple model does not meet standard criteria of acceptable fit, such as a Comparative Fit Index (CFI) greater than .95 or a Root Mean Square Error of Approximation (RMSEA) below .06. Another finding is that the correlation between the factors (i.e., unobserved variables that are assumed to cause the shared variance among items) is even stronger, r = .69, than the correlation between the scales. This would normally be interpreted as evidence that measurement error attenuates the correlation between scales and that the correlation between the factors shows the true correlation. However, the model does not fit, and the correlation should not be interpreted.
Inspection of the items suggests some reasons why the simple model may not fit and why the positive correlation is at least inflated, if not totally an artifact. For example, the item "Have a low opinion of myself" is used as an item to measure Depression, while the item "Have a high opinion of myself" is reversed and used to measure Modesty (reverse scoring means that low ratings on this item are scored as high modesty). Just looking at the items, we might suspect that they are measures of low and high self-esteem, respectively. It is plausible that Depression and Modesty are both linked to self-esteem, but it is a problem to use self-esteem items to measure both. This will produce an artificial positive correlation between the scales and lead to the false impression that Depression and Modesty are positively correlated when they are actually unrelated or even negatively related. This is what I call the misch-masch problem of personality measurement. Scales are foremost averages of items, and it is not clear what these scales measure if the scales are not properly evaluated with a measurement model.
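To see how shared item content alone can manufacture a scale correlation, here is a small simulation (entirely my own illustration with made-up loadings, not the IPIP items): the Depression and Modesty dispositions are generated as uncorrelated, but each 10-item scale contains three items that are driven by a third cause, self-esteem.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000
dep = rng.normal(size=n)     # depression disposition
mod = rng.normal(size=n)     # modesty disposition, uncorrelated with dep
esteem = rng.normal(size=n)  # self-esteem, a third cause

def item(cause, w=0.7):
    # one noisy item dominated by its intended cause
    return w * cause + rng.normal(scale=np.sqrt(1 - w**2), size=n)

# 7 items driven by the intended disposition + 3 items driven by (low) self-esteem
dep_scale = np.mean([item(dep) for _ in range(7)] + [item(-esteem) for _ in range(3)], axis=0)
mod_scale = np.mean([item(mod) for _ in range(7)] + [item(-esteem) for _ in range(3)], axis=0)

print(np.corrcoef(dep_scale, mod_scale)[0, 1])  # about .13, although corr(dep, mod) = 0
```

In this toy setup the artifactual correlation is modest; with more overlapping content and shared evaluative bias, it can plausibly grow to the size observed for the actual scales.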
As items are closer to the level of everyday conversations about personality, it is not difficult to notice other similarities between items. For example, "often feel blue" and "rarely feel blue" are simply oppositely worded questions about the same feeling. These items should correlate more strongly (negatively) with each other than the items "rarely feel blue" and "feel comfortable with myself". However, our interpretation of items may differ from the interpretation of the average survey respondent. Thus, we need to examine empirically the pattern of correlations. One reason why personality researchers do not do this is another confusion caused by a bad label. The best statistical tool to explore the pattern of correlations among items is called Confirmatory Factor Analysis. The label "Confirmatory" has led to the false impression that this method can only be used to confirm a theoretical model. But when a model like the simple model in Figure 1 does not fit, we do not have a theory to suggest a more complex model. We could of course explore the data, but the term confirmatory implies that this would be wrong or an abuse of a method that should not be used for exploration. This is pure nonsense. We can use CFA to explore the data, find a plausible model that fits the data, and then confirm this model with a new dataset. We can then also use this model to make new predictions, test them, and if the predictions fail, further revise the model. This is called science and is fully in line with Cronbach and Meehl's (1955) approach to construct validation. Why do I make such a big deal about this? Because my suggestion to use CFA to explore personality data has been met with a lot of resistance by veteran personality psychologists.
In response to a related blog post, William Revelle wrote me an email.
Uli, Inspired by your blog on how one needs to use CFA to do hierarchical models (which is in fact, incorrect), I prepared the enclosed slides. I try to point out that EFA approaches can a) give goodness of fit tests and b) do hierarchical models. In a previous post you suggested that those of us in personality should know some psychometrics and not use simple sum scores. I think you are correct with respect to the first part of your argument, but you might find my paper with Keith Widaman a useful response suggesting that sum scores are not as bad as you think. Your comment about some people (i.e., our Dutch friend) refusing to understand the silliness of a general factor of personality was most accurate. Bill
Bill is right that EFA can sometimes produce the right results, but this is not a good argument for using an inferior method. The key problem with EFA is that it does not require any theory and, as a result, also does not test a theory. If a model does not fit, researchers cannot change it because the model is derived from a rigid set of mathematical principles rather than from substantive theory. In contrast, CFA requires that researchers think about their data and about why a model does not fit.
In response to my CFA analysis of Costa and McCrae’s NEO-PI-R, Robert McCrae wrote this response:
Uli I just read your blog on “what lurks beneath”. I must say that I find the blog format disconcerting, both for its informality and its lack of editing and references. But here are a few responses. 1. We certainly agree that people ought to measure facets as well as domains; that personality is not simple structured; that there is some degree of evaluative bias in any single source of data. 2. What we argued in the 1996 paper was that CFA “as it has typically been applied in investigating personality structure, is systematically flawed” (p. 552, italics added). I should think you would agree with that position; you have criticized others for failing to acknowledge secondary loadings and evaluative biases in their CFAs. 3. Why in the world do you think that “CFA is the only method that can be used to test structural theories”? If that were true, I would agree with your position. But the major point of our paper was to offer an alternative confirmatory approach using targeted rotation. There are a number of instances where this method has led to falsification of hypotheses—John’s study of personality in dogs and cats showed that the FFM doesn’t fit even after targeted rotation. 4. I would have liked a comparison with Marsh’s ESEM, which was developed in part in response to our 1996 paper. 5.”The evaluation of model fit was still evolving”. That, I would say, is an understatement. In my experience, most fit indices in SEM and other statistical approaches are essentially as arbitrary as p < .05. There are virtually no empirical tests of the utility of fit indices. And most are treated as dichotomies: A model fits or not. That is like deciding that coefficient alpha should be .70, and throwing out a scale because its alpha is only .69. I recall a paper on national levels of traits in which the authors were told by reviewers not to report the observed means because they could not demonstrate measurement invariance. This is statistically-mandated data suppression. 6. I am not quite convinced by your analysis of evaluative bias in the NEO data. It is really difficult to separate substance from style in mono-method data. One could argue that the factor you call EVB is really N, and vice-versa. I have attached a chapter in which we reported joint factor analyses of self-reports and observer ratings and included bias factors (pp. 280-283). –Jeff
I was fortunate to take a CFA (SEM) course offered by Ralf Schwarzer at the Free University Berlin in the early 1990s. I have been using LISREL, EQS, and now MPLUS for 30 years. I thought the older professors were just too old to learn this method and that attitudes would change. However, Borsboom wrote his attack on bad practices in personality research in 2006, and measurement is still considered a secondary topic in graduate education. This attitude towards measurement has been called a measurement-schmeasurement attitude (Flake & Fried, 2020). It is time to end this embarrassing status quo and to take measurement seriously.
After exploring the data and trying many different models, I settled on a model that fits the data. I then cross-validated this model in the second dataset. Given the large sample sizes, the structure is very robust, and the model had nearly identical fit in the second dataset. The model fit of the cross-validated model also met standard fit criteria, CFI = .983, RMSEA = .035. This does not mean that it is the best model. As the data are open, other researchers could try to find better models. Importantly, minor differences between models do not matter, as long as the main results are consistent. The model also does not automatically tell us what the 10-item scales measure. This question can only be answered with additional data that relate the factors in the model to other variables. However, we can at least see how items are related to the factors that the scales aimed to measure.
Figure 2 shows that it is possible to describe the correlations among items from the same scale with three factors that are simply labeled Dep1, Dep2, and Dep3 for Depression and Mod1, Mod2, and Mod3 for Modesty. Dep1 is mainly related to feeling blue and depressed. Dep2 is related to low self-esteem. Dep3 is related to two items that might be interpreted as pessimism. Mod1 is related to low self-esteem, Mod2 is about bragging, and Mod3 is about avoiding being the center of attention. As predicted by the similar wording, two self-esteem items of Mod2 are also related to the Dep2 factor. In addition, the Modesty factor is also related to Dep2, presumably because modest participants do not give themselves high ratings on self-esteem items. However, there is no relationship with Dep1, the feeling-blue factor. Thus, Modesty is not related to feeling depressed, as implied by the Depression label of the scale. In fact, the correlation between the Depression and Modesty factors is now close to zero. Thus, the strong correlation in the badly fitting model and the moderate correlation based on scale scores misrepresent the relationship between Depression and Modesty.
Simple models of two facets are just a building block on the way to testing more complex models of personality. I hope you realize that this is an important step before personality scales can be used for research and before people are given feedback about their personality online. You might be surprised that not all personality psychologists agree. Some personality researchers would rather publish pretty pictures of the models in their heads without checking that they actually fit real data. For example, Colin DeYoung has published this picture to illustrate his ideas about the structure of personality.
This model implies that there should be a negative correlation between the Depression facet of Neuroticism and the Modesty facet of Agreeableness because Stability has a negative relationship with Neuroticism and a positive relationship with Agreeableness (minus times plus = minus). I shared my initial results, which showed a positive correlation that contradicts his model (see also our published results, Anusic et al., 2009, which showed problems with the Stability factor).
His final response was:
“Uli, I think the problem is that the actual structure is too complex to make it easily represented in a single CFA model. The point of the pictures is to show only some important aspects of the actual structure. As long as one acknowledges it’s only part of the structure, I don’t see that as a problem.”
To my knowledge he has never attempted to specify his model in more detail to accommodate findings that are inconsistent with this simple model. He also does not seem very eager to explore this question using CFA.
I suppose I could try to create a more complete CFA model, starting from the 10 aspects, which would allow correlations between Enthusiasm and Compassion and between Politeness and Assertiveness, and also would include additional paths from Plasticity and Stability to certain aspects, but even then I’d be wary of claiming it was the complete structure. Whatever might be left out could still easily lead to misfit. It would take a lot of chutzpah to claim that one was confident in understanding all details of the covariance structure of personality traits.
To me this sounds like an excuse for bad fit: the picture gets it right, even if the model does not fit. This is the same argument that was ridiculed in Borsboom's critique of Costa and McCrae. If models are immune to empirical tests, they are merely figments of researchers' imagination. To make scientific claims, one first needs to pass the first test: show that a model fits the data, and if a simple model does not fit the data, we need to reject the simple model and find a better one. As Revelle pointed out, nowadays EFA software can also show fit indices. What he doesn't say is that typical EFA models have bad fit and that there is not much EFA users can do when this is the case. In contrast, CFA can be used to explore the data, find plausible models with good fit, like the one in Figure 2, and then test these models with new data. Call me crazy, but I have the chutzpah and confidence that I can find a well-fitting model for the structure of personality. In fact, I have already done so (Schimmack, 2019), and now I am working on doing the same for the IPIP-300. Stay tuned for the complete results. I hope this post made it clear why it is important to examine this question even for measures that have been used for decades in hundreds of studies.
Post-Script: When a figure says less than zero words
In a further email exchange Colin DeYoung asked me to add the following clarification.
COLIN:
Uli, please add the following quote to your blog post. You are misrepresenting me inasmuch as you are claiming that my theoretical position requires that your model of modesty and depression should show a negative correlation between modesty and depression. This is not true. I would absolutely never predict that, and I think quoting the passage here makes it clear why that is:
“A final note on the hierarchy shown in Fig. 1: It is necessarily an oversimplification at the levels below the Big Five, because personality does not have simple structure (Costa & McCrae, 1992; Hofstee, de Raad, & Goldberg, 1992). Some facets and aspects have associations, not depicted in the figure, with factors in other domains. This is true even between some traits located under different metatraits, which could not be related if the diagram in Fig. 1 were complete. For example, Compassion is positively related to Enthusiasm, and Politeness is negatively related to Assertiveness (DeYoung et al., 2007, 2013).”
ULI:
Happy to add this to the blog post, but I do have to ask. Is there any finding that you would take seriously to revise your model or is this model basically unfalsifiable?
After all, I also fitted a model without higher-order factors and aspects to the 30 facets. It would be really interesting to do a multi-method study with facet-factors as starting point, but I don’t know a study that did that or any data to do it.
Thanks, Uli. Please do add that text to your blog post as my explanation of the figure.
As that text points out, what you’re calling my “model” is in fact just a summary of various empirical results. It is not, and has never been, intended as a formal CFA model.
[Explanation: The figure uses the symbolic language of causal modeling, which links factors (circles) to other factors (circles) with arrows pointing from one factor to another (implying a causal effect, or at least shared variance among factors that are related to a common higher-order factor). It is not clear what this figure could tell readers unless we believe that factors are real and at some point explain a pattern of observed correlations. To say that the model is not a CFA model is to say that the model makes no empirical predictions and that factors like Stability or Plasticity only exist as constructs in Colin's imagination. I am not sure why we should print such imaginary models in a scientific article.]
Psychologists have studied dating (also sometimes called mating by evolutionary psychologists) for 100 years (more or less). We are therefore able to give young, inexperienced novices expert advice. This advice is particularly important for young men because the human mating ritual in many cultures still puts them in the position of the actor who has to initiate a complex mating ritual (Elaine on Seinfeld: "We mostly play defense"). The leading experts from elite universities like Harvard are willing to share their knowledge, but these personalized courses are not yet available, and probably not free.
Fortunately, I am able to provide free expert advice with a brief instructional video that illustrates all the things you should NOT do on a first date. Just do the opposite and you will be fine. Please add further suggestions in the comment section. Advice from classy women is especially welcome.
The ideal model of science is that scientists are well-paid, with job security, to work collaboratively towards progress in understanding the world. In reality, scientists operate like monarchs in the old days or company CEOs in the modern world. They try to expand their influence as much as possible. This capitalistic model of science could work if there were a market that rewards CEOs of good companies for producing better products at a cheaper price. However, even in the real world, markets are never perfect. In science, there is no market, and success is driven by many factors that have nothing to do with the quality of the product.
The products of empirical scientists often contain a valuable novel contribution, even if the overall product is of low quality. The reason is that empirical psychologists often collect new data. Even this contribution can be useless when the data are not trustworthy, as the replication crisis in social psychology has shown. However, data are often interesting and, when shared, can benefit other researchers. Scientists who work in non-empirical fields (e.g., mathematicians, philosophers, statisticians) do not have that advantage. Their products are entirely based on their cognitive abilities. Evidently, it is much easier to find some new data than to come up with a novel idea. This creates a problem for non-empirical scientists because it is a lot harder to come up with an empire-expanding novel idea. This can be seen in the fact that the most famous philosophers are still Plato and Aristotle and not some modern philosopher. It can also be seen in the fact that it is hard for psychometricians to compete with empirical researchers for attention and jobs. Many psychology departments have stopped hiring psychometricians because empirical researchers often add more to the university rankings. Case in point: my own university, including all three campuses, is one of the largest departments in the world and does not have a formally trained psychometrician. Thus, my criticism of psychometricians should not be seen as a personal attack. Their unhelpful behaviors can be attributed to a reward structure that rewards unhelpful behaviors, just as Skinner would have predicted on the basis of their reward schedule.
Measurement Models without Substance
A key problem for psychometricians is that they are not rewarded for helping empirical psychologists who work on substantive questions. Rather, they have to make contributions to the field of psychometrics. To have a big impact, it is therefore advantageous to develop methods that can be used by many researchers who work on different research questions. This is like ready-to-wear clothing. The empirical researcher just needs to pick a model, plug the data into the model, and the truth comes out at the other end. Many readers will realize that ready-to-wear clothing has its problems. Mainly, it may not fit your body. Similarly, a ready-to-use statistical model may not fit a research question, but users of statistical models who are not trained in statistics may not realize this, and psychometricians have no interest in telling them that their model is not appropriate. As a result, we see many articles that uncritically apply statistical models to the wrong data. To avoid this problem, psychometricians would have to work with empirical researchers like tailors who create custom-fitted clothing. This would produce high-quality work, but not the market influence and rewards that ready-to-wear companies can achieve.
Don’t take my word for it. The most successful contemporary psychometrician said so himself.
“The founding fathers of the Psychometric Society—scholars such as Thurstone, Thorndike, Guilford, and Kelley—were substantive psychologists as much as they were psychometricians. Contemporary psychometricians do not always display a comparable interest with respect to the substantive field that lends them their credibility. It is perhaps worthwhile to emphasize that, even though psychometrics has benefited greatly from the input of mathematicians, psychometrics is not a pure mathematical discipline but an applied one. If one strips the application from an applied science one is not left with very much that is interesting; and psychometrics without the “psycho” is not, in my view, an overly exciting discipline. It is therefore essential that a psychometrician keeps up to date with the developments in one or more subdisciplines of psychology.“ (Borsboom, 2006)
Borsboom has carefully avoided following his own advice and became a rock star for his claims that the founders of psychometrics were all delusional because they actually believed in substances that could be measured (traits) and developed methods to measure intelligence, personality, or attitudes. Borsboom declared that personality does not exist, that the tools used to claim it exists, like factor analysis, are false, and that the way researchers present evidence for the existence of psychological substances, as outlined by two more founding psychometricians (Cronbach & Meehl, 1955), was false. Few of the psychometricians who gave him an award realized that his Attack of the Psychometricians (Borsboom, 2006) was really an attack by one ego-maniac psychometrician on the entire field. Despite Borsboom's fame as measured by citations, his attack is largely ignored by substantive researchers, who couldn't care less about somebody who claims their topic of study is just a figment of imagination without any understanding of the substantive area that is being attacked.
A greater problem is posed by psychometricians who market statistical tools that applied researchers actually use without understanding them. And that is what this blog post is really about. So, end of ranting and on to showing how psychometrics without substance can lead to horribly wrong results.
Michael Eid’s Truth Factor
Psychometrics is about measurement and psychological measurement is not different from measurement in other disciplines. First, researchers assume that the world we live in (reality) can be described and understood with models of the world. For example, we assume that there is something real that makes us sometimes sweat, sometimes wear just a t-shirt, and sometimes wear a thick coat. We call this something temperature. Then we set out to develop instruments to measure variation in this construct. We call these instruments thermometers. The challenging step in the development of thermometers is to demonstrate that they measure temperature and that they are good measures of temperature. This step is called validation of a measure. A valid measure measures what it is supposed to measure and nothing else. The natural sciences have made great progress by developing better and better measures of constructs we all take for granted in everyday life like temperature, length, weight, time, etc. (Cronbach & Meehl, 1955). To make progress, psychology would also need to develop better and better measures of psychological constructs such as cognitive abilities, emotions, personality traits, attitudes, and so on.
The basic statistical tool that psychometricians developed to examine the validity of psychological measures is factor analysis. Although factor analysis has evolved and become increasingly easy and cheap with the advent of powerful personal computers, the basic idea has remained the same. Factor analysis relates observed measures to unobserved variables that are called factors and estimates the strength of the relationship between the observed variable and the unobserved variable, providing information about the variance in a measure that is explained by a factor. Variance explained by the factor is valid variance if the factor represents the construct that a researcher wanted to measure. Variance that is not explained by a factor represents measurement error. The key problem for substantive researchers is that a factor may not correspond to the construct that they were trying to measure. As a result, even if a factor explains a lot of the variance in a measure, the measure could still be a poor measure of the construct. Thus, the key problem for validation research is to justify the claim that a factor measures what it is assumed to measure.
Welcome to Michael Eid’s genius short-cut to the most fundamental challenge in psychometrics. Rather than conducting substantive research to justify the interpretation of a factor, researchers simply declare one measure as a valid measure of a construct. You may thin, surely, I am pulling wool over your eyes and nobody could argue that we can validate measures by declaring them to be valid. So, let me provide evidence for my claim. I start with Eid, Geiser, Koch, and Heene’s (2017) article that is built on the empire-expanding claim that all previous applications of another empire-expanding model called the bi-factor model, are false and that researchers need to use the authors model. This article is flagged as highly-cited in WebofScience showing that this claim has struck fear in applied researchers who were using the bi-factor model.
One problem for applied researchers is that psychometricians are trained in mathematics and use mathematical language in their articles which makes it impossible for applied researchers to understand what they are saying. For example, it would take me a long time to understand what this formula in Eid et al.’s article tries to say.
Fortunately, psychometricians have also developed a simpler language to communicate about their models that uses figures with just four elements that are easy to understand. Boxes represent measured variables where we have actual scores of people in a sample. Circles are unobserved variables where we do not have scores of individuals. Straight, directed arrows imply a causal effect. The key goal of a measurement model is to estimate parameters that show how strong these causal effects are. Finally, there are also curved, undirected paths that reflect a correlation between two variables without assuming causality. This simple language makes it possible for applied researchers to think about the statistical model that they are using to examine the validity of their measures. Eid et al.'s Figure 1 shows the bi-factor model they criticize with an example of several cognitive tasks that were developed to measure general intelligence. In this model, general intelligence is an unobserved variable (g). Nothing in the bi-factor model tells us whether this factor really measures intelligence. So, we can ignore this hot-button issue and focus on the question that the bi-factor model actually can answer: Are the tasks that were developed to measure the g-factor good measures of the g-factor? To be a good measure, a measure has to be strongly related to the g-factor. Thus, the key information that applied researchers care about is the parameter estimates for the directed paths from the g-factor to the 9 observed variables. Annoyingly, psychometricians use Greek letters to refer to these parameters. The English term is factor loadings, and we could just use L for loading to refer to these parameters, but psychometricians feel more like scientists when they use the Greek letter lambda.
But how can we estimate the strength of an unobserved variable's effect on an observed variable? This sounds like magic or witchcraft, and some people have argued that factor analysis is fundamentally flawed and produces illusory causal effects of imaginary substances. In reality, factor analysis is based on the simple fact that causal processes produce correlations. If there really are people who are better at cognitive tasks, they will do better on different tasks, just as athletic people are likely to do better in several different sports. Thus, a common cause will produce correlations between two effects. You may remember this from PSY100, where it is introduced as the third-variable problem. The correlation between height and hair-length (churches and murder rates, etc.) does not reveal a causal effect of height on hair-length or vice versa. Rather, it is produced by a common cause. In this case, gender explains most of the correlation between height and hair-length because men tend to be taller and tend to have shorter hair, producing a negative correlation. Measurement models use the relationship between correlation and causation to infer the strength of common causes on the basis of the strength of correlations among the observed variables. To do so, they assume that there are no direct causal effects of one measure on another. That is, just because we measured your temperature under your armpits before we measured it in your ear and mouth does not produce correlations among the three measures of temperature. This assumption is represented in the figure by the fact that there are no direct relationships among the observed variables. The correlations merely reflect common causes, and when three measures of temperature are strongly correlated, it suggests that they are all measuring the same common cause.
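The same logic can be made concrete with a few lines of code. The simulation below is a minimal sketch with made-up numbers: three measures share a single common cause and have no direct effects on each other, and the strength of each causal path can then be recovered from the correlations alone (here with the classic one-factor "triad" formula).

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
g = rng.normal(size=n)  # the unobserved common cause

loadings = {"x1": 0.8, "x2": 0.6, "x3": 0.5}  # made-up causal strengths
x = {name: lam * g + np.sqrt(1 - lam**2) * rng.normal(size=n)
     for name, lam in loadings.items()}

r12 = np.corrcoef(x["x1"], x["x2"])[0, 1]
r13 = np.corrcoef(x["x1"], x["x3"])[0, 1]
r23 = np.corrcoef(x["x2"], x["x3"])[0, 1]

# With one common cause and no direct effects among measures, r_ij = lam_i * lam_j,
# so each loading follows from the three observed correlations.
print("x1:", np.sqrt(r12 * r13 / r23))  # ~0.8
print("x2:", np.sqrt(r12 * r23 / r13))  # ~0.6
print("x3:", np.sqrt(r13 * r23 / r12))  # ~0.5
```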
A simple model of g might assume that performance on a cognitive measure is influenced by only two causes. One is the general ability (g) that is represented by the directed arrow from g to the variable that represents variation in a specific task, and the other is due to factors that are unique to this measure (e.g., some people are better at verbal tasks than others). This variance that is unique to a variable is often omitted from figures, but is part of the model in Figure 1.
The problem with this model is that it often does not fit the data. Cognitive performance does not have a simple structure. This means that some measures are more strongly correlated than a model with a single g-factor predicts. Bi-factor models model these additional relationships among measures with additional factors. They are called S1, S2, and S3 (thank god, they didn’t call them sigma or some other Greek name) and S stands for specific. So, the model implies that participants’ scores on a specific measure are caused by three factors: the general factor (g), one of the three specific factors (S1, S2, or S3), and a factor that is unique to a specific measure. The model in Figure 1 is simplistic and may still not fit the data. For example, it is possible that some measures that are mainly influenced by S2 are also influenced a bit by S1 and S3. However, these modifications are not relevant for our discussion, and we can simply assume that the model in Figure 1 fits the data reasonably well.
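For readers who like to see these assumptions spelled out, here is a small sketch of the kind of data-generating process the bi-factor model in Figure 1 implies. The loadings (.6 on the general factor, .5 on a specific factor) are hypothetical and chosen only to keep the indicators standardized.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
g = rng.normal(size=n)                               # general factor
s = {1: rng.normal(size=n), 2: rng.normal(size=n), 3: rng.normal(size=n)}  # S1, S2, S3

def indicator(block, lam_g=0.6, lam_s=0.5):
    # unique variance chosen so that each indicator has unit variance
    e = rng.normal(scale=np.sqrt(1 - lam_g**2 - lam_s**2), size=n)
    return lam_g * g + lam_s * s[block] + e

# three indicators per specific factor, nine indicators in total
X = np.column_stack([indicator(b) for b in (1, 1, 1, 2, 2, 2, 3, 3, 3)])

# within-block correlations ~.61 (shared g and shared specific factor),
# between-block correlations ~.36 (shared g only)
print(np.corrcoef(X, rowvar=False).round(2))
```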
From a substantive perspective, it seems plausible that two cognitive measures could be influenced by a general factor (e.g., some students do better in all classes than others) and some specific factors (e.g., some students do better in science subjects). So, while the bi-factor model is not automatically the correct model, it would seem strange to reject it a priori as a plausible model. Yet, this is exactly what Eid et al. (2017) are doing based on some statistical discussion that I honestly cannot follow. All I can say is that, from a substantive point of view, a bi-factor model is a reasonable specification of the assumption that cognitive performance can be influenced by general and specific factors and that this model predicts stronger correlations among measures that tap the same specific abilities than measures that share only the general factor as a common cause.
After Eid et al. convinced themselves, reviewers, and an editor at a prestigious journal that their statistical reasoning was sound, they proposed a new way of modeling correlations among cognitive performance measures. They call it the bifactor-(S-1) model. The key difference between this model and the bi-factor model is that the authors remove one of the specific factors from the model; hence, S – 1.
You might say: but what if there is specific variance that contributes to performance on these tasks? If these specific factors exist, they would produce stronger correlations between measures that are influenced by these specific factors, and a model without this factor would not fit the data (as well as the model that includes a specific factor that actually exists). Evidently, we cannot simply remove factors willy-nilly without misrepresenting the data. To solve this problem, the bi-factor (S-1) model introduces new parameters that help the model to fit the data as well as or better than the bi-factor model.
Figure 4 in Eid et al.’s article makes it possible for readers who are not statisticians to see the difference between the models. First, we see that the S1 factor has been removed. Second, we see that the meaningful factor names (g = general and s = specific) have been replaced by obscure Greek letters where it is not clear what these factors are supposed to represent. The Greek letter tau (I had to look this up) stands for T = true score. Now true score is not a substantive entity. It is just a misleading name for a statistical construct that was created for a measurement theory that is called classic, meaning outdated. So, the bi-factor (S-1) model no longer claims to measure anything in the real world. There is no g-factor that is based on the assumption that some people will perform better on all cognitive tasks that were developed to measure this common factor. There are also no longer specific factors because specific factors are only defined when we first attribute performance to a general factor and see that other factors also have a common effect on subsets of measures. In short, the model is not a substantive model that aims to measure anything. It is like creating thermometers without assuming that temperature exists. When I discussed this with Michael Eid years ago, he defended this approach to measurement without constructs with a social-constructionist philosophy. The basic idea is that there is no reality and that constructs and measures are social creations that do not require validation. Accordingly, the true score factor measures what a researcher wants to measure. We can simply pick two or three correlated measures and the construct becomes whatever produces variation in these three measures. Other researchers can pick other measures and the factors that produce variation in these measures are the construct. This approach to measurement is called operationalism. Accordingly, constructs are defined by measures and intelligence is whatever some researcher chooses to measure and call intelligence. Operationalism was rejected by Cronbach and Meehl (1955), and this rejection led to the development of measurement models that can be used to examine whether a measure actually measures what it is intended to measure. The bifactor (S-1) model avoids this problem by letting researchers choose measures that define a construct without examining what produces variation in these measures.
“One way to define a G factor in a single-level random experiment is to take one domain as a reference domain. Without loss of generality, we may choose the first domain (k = 1) as reference domain and take the first indicator of this domain (i = 1) as a reference indicator. This choice of the reference domain and indicator depends on a researcher’s theory and goals” (Eid et al., 2017, p. 550).
While the authors are transparent about the arbitrary nature of true scores – what is true variance depends on researchers’ choice of which specific factors to remove – they fail to point out that this model cannot be used to test the validity of measures because there is no longer a claim that factors correspond to real-world objects. Now both the measures and the constructs are constructed and we are just playing around with numbers and models without testing any theoretical claims.
Assuming the bi-factor model fits the data, it is easy to explain what the factors in the bi-factor (S-1) model are and why it fits the data. Because the model removes the S1 factor, the true-score factor now represents the g-factor and the S1 factor combined. The g+S1 factor still predicts variance in the S2 and S3 measures because of the g-variance in the g+S1 factor. However, because the S1-variance in the g+S1 factor is not related to the S2 and S3 measures, the g+S1 factor explains less variance in the S2 and S3 measures than the g-factor in the bi-factor model. The specific factors in the bi-factor (S-1) model with the Greek symbol zeta (ζ) now predict more variance in the S2 and S3 measures because they not only represent the specific variance, but also some of the general factor variance that is not removed by using the contaminated g+S1 factor to account for shared variance among all measures. Finally, because the zeta factors now contain some g-variance that is shared between S2 and S3 measures, the two zeta factors are correlated. Thus, g-variance is split into g-variance in the g+S1 factor and g-variance that is common to the zeta factors.
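This claim is easy to check numerically. The sketch below generates data from a standard bi-factor model and then treats the first domain as the reference, as the bifactor-(S-1) logic prescribes. After removing the reference composite, the residuals of the other two domains remain positively correlated, which is exactly the correlated zeta factors described above. The loadings and the composite-based shortcut are my own simplification for illustration, not Eid et al.’s estimation procedure.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
g, s1, s2, s3 = rng.normal(size=(4, n))              # general and specific factors

def item(spec):
    return 0.6 * g + 0.5 * spec + rng.normal(scale=0.62, size=n)

# composite of the reference domain: a proxy for the contaminated g+S1 factor
ref = np.mean([item(s1) for _ in range(3)], axis=0)
y2, y3 = item(s2), item(s3)                          # one S2 and one S3 indicator

def residual(y, x):
    # residual of y after regressing it on x
    beta = np.cov(y, x)[0, 1] / np.var(x, ddof=1)
    return y - beta * x

# positive: the g-variance not captured by the contaminated reference factor
# ends up as a correlation between the two "zeta" factors
print(round(np.corrcoef(residual(y2, ref), residual(y3, ref))[0, 1], 2))
```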
Eid et al. might object that I assume the g-factor is real and that this may not be the case. However, this is a substantive question and the choice between the bi-factor model and the bi-factor (S-1) model has to be based on broader theoretical considerations and eventually empirical tests of the competing models. To do so, Eid et al. would have to explain why the two zeta-factors are correlated, which implies an additional common cause for S2 and S3 measures. Thus, the empirical question is whether it is plausible to assume that in addition to a general factor that is common to all measures, S2 and S3 have another common cause that is not shared by S1 measures. The key problem is that Eid et al. are not even proposing a substantive alternative theory. Instead, they argue that there are no substantive questions and that researchers can pick any model they want if it serves their goals. “This choice of the reference domain and indicator depends on a researcher’s theory and goals” (p. 550).
If researchers can just pick and choose models, it is not clear why they could not just pick the standard bi-factor model. After all, the bi-factor (S-1) model is just an arbitrary choice to define the general factor in terms of items without a specific factor. What is wrong with choosing to allow all measures to be influenced by specific factors, as in the standard bi-factor model? Eid et al. (2017) claim that this model has several problems. The first claim is that the bi-factor model often produces anomalous results that are not consistent with the a priori theory. However, this is a feature of modeling, not a bug. What are the chances that a priori theories always fit the data? The whole point of science is to discover new things, and new things often contradict our a priori notions. However, psychologists seem to be averse to discovery and have created the illusion that they are clairvoyant and never make mistakes. This narcissistic delusion has impeded progress in psychology. Rather than recognizing that anomalies reveal problems with the a priori theory, they blame the method for these results. This is a stupid criticism of models because it is always possible to modify a model and find a model that fits the data. The real challenge in modeling is that often several models fit the data. Bad fit is never a problem of the method. It is a problem of model misspecification. As I showed, proper exploration of data can produce well-fitting and meaningful models with a g-factor (Schimmack, 2022). This does not mean that the g-factor corresponds to anything real, nor does it mean that it should be called intelligence. However, it is silly to argue that we should give up on models with a general factor and instead simply pick some measures to create constructs that do not even aim to measure anything real.
Another criticism of standard bi-factor models is that the loadings (i.e., the effect sizes of the general factor on measures) are unstable. “That means, for example, that the G-factor of intelligence should stay the same (i.e., “general”) when one takes out four of 10 domains of intelligence” (p. 546). Eid et al. point out that this is not always the case.
“Reise (2012), however, found that the G factor loadings can change when domains are removed. This causes some conceptual problems, as it means that G factors as measured in the bifactor and related models are not generally invariant across different sets of domains used to measure them. This can cause problems, for example, in literature reviews or meta-analyses that summarize data from different studies or in so-called conceptual replications in which different domains were used to measure a given G factor, because the G factors may not be comparable across studies.” (p. 546).
This is nonsense. First of all, the problem that results are not comparable across studies is much greater when researchers just start arbitrarily selecting sets of measures as indicators of the general+S factor, because the g+S1, g+S2, and g+S3 factors are conceptually different. All real sciences have benefited from unification and standardization of measurement by selecting the best measures. In contrast, only psychologists think we are making progress by developing more and more measures. The use of bi-factor (S-1) models makes it impossible to compare measures because they are all valid measures of researchers’ pet constructs. Thus, use of this model will further impede progress in psychological measurement.
Eid et al. (2017) also exaggerate the extent to which results depend on the choice of measures in the bi-factor model. The more highly the measures are correlated and the better they cover the full range of the construct, the more stable and comparable the results will be. Moreover, the only reason for notable changes in loadings would be mismeasurement of the general factor because some specific factors were not properly modeled. To support my claim, I used the data from Brunner et al. (2012) who fitted a bi-factor model to 14 measures of g. I randomly split the 14 measures into two sets of 7 and fitted a model with two g-factors and let the two factors correlate. The magnitude of this correlation shows how much inferences about g would depend on the arbitrary selection of measures. The correlation was r = .96 with a 95%CI ranging from .94 to .98. While number-nerds might get a hard-on because they can now claim that results are unstable, p < .05, applied researchers might shrug and think that this correlation is good enough to conclude that the two sets measure the same thing and that it is ok to combine results in a meta-analysis.
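The actual analysis fitted two correlated g-factors in one SEM, but the intuition can be conveyed with a much simpler sketch using made-up loadings (this is not a reanalysis of Brunner et al.’s data): when all measures load substantially on one general factor, composites of two arbitrary half-sets correlate highly, and the disattenuated correlation approaches 1.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 2_000, 14
lam = rng.uniform(0.5, 0.8, size=k)                  # hypothetical g-loadings
g = rng.normal(size=n)
X = lam * g[:, None] + rng.normal(size=(n, k)) * np.sqrt(1 - lam**2)

half_a, half_b = X[:, :7].mean(axis=1), X[:, 7:].mean(axis=1)
r = np.corrcoef(half_a, half_b)[0, 1]

def alpha(block):
    # Cronbach's alpha: p/(p-1) * (1 - sum of item variances / variance of the sum)
    c = np.cov(block, rowvar=False)
    p = c.shape[0]
    return p / (p - 1) * (1 - np.trace(c) / c.sum())

r_latent = r / np.sqrt(alpha(X[:, :7]) * alpha(X[:, 7:]))
print(round(r, 2), round(r_latent, 2))               # disattenuated value approaches 1
```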
In sum, the criticism of bi-factor models is all smoke and mirrors to advertise another way of modeling data and to grab market share from the popular bi-factor model that took away market share from hierarchical models. All of this is just a competition among psychometricians to get attention that doesn’t advance actual psychological research. The real psychometric advances are made by psychometricians who created statistical tools for applied researchers, like Joreskog, Bentler, and Muthen and Muthen. These tools and substantive theory are all that applied researchers need. The idea that statistical considerations can constrain the choice of models is misleading and often leads to suboptimal and wrong models.
Readers might be a bit skeptical that somebody who doesn’t know the Greek alphabet and doesn’t understand some of the statistical arguments is able to criticize trained psychometricians. After all, they are experts and surely must know better what they are doing. This argument ignores the systemic factors that make them do things that are not in the best interest of science. Making a truly novel and useful contribution to psychometrics is hard and many well-meaning attempts will fail. To make my point, I present Eid et al.’s illustration of their model with a study of emotions. Now, I may not be a master psychometrician, but nobody can say that I lack expertise in the substantive area of emotion research and in attempts to measure emotions. My dissertation in 1997 was about this topic. So, what did Eid et al. (2017) find when they created a bi-factor (S-1) measurement model of emotions?
Eid et al. (2017) examined the correlations among self-reports of 9 specific negative emotions. To fit their model to the data, they used the Anger domain as the reference domain. Not surprisingly, anger, fury, and rage had high loadings on the true score factor (falsely called the g-factor) and the other negative emotions had low loadings on this factor. This result makes no sense and is inconsistent with all established models of negative affect. All we really learn from this model is that a factor that is mostly defined by anger also explains a small amount of variance in sadness and self-conscious negative emotions. Moreover, this result is arbitrary and any one of the other emotions could have been used to model the misnamed g-factor. As a result, there is nothing general about the g-factor. It is a specific factor by definition. “The G factor in this model represents anger intensity” (p. 553). But why would we call a specific emotion factor a general factor? This makes no theoretical sense. As a result, this model does not specify any meaningful theory of emotions.
A proper bi-factor or hierarchical model would test the substantive theory that some emotions covary because they share a common feature. The most basic feature of emotions is assumed to be valence. Based on this theory, emotions with the same valence are more likely to co-occur, which results in positive correlations among emotions of the same valence. Hundreds of studies have confirmed this prediction. In addition, emotions also share specific features such as appraisals and action tendencies. Emotions that also share these components are more likely to co-occur than emotions with different or even opposing appraisals. For example, pride and gratitude are based on opposing appraisals of attribution to self or others. A measurement model of emotions might represent these assumptions in a model with one or two general factors for valence (the dimensionality of valence is still debated) and several specific factors. In this model, the general factor has a clear meaning and represents the valence of an emotion. Fitting such a model to the data is useful to test the theory. Maybe the results confirm the model, maybe they don’t. Either way, we learn something about human emotions. But if we fit a model that does not include a factor that represents valence and misleadingly label an anger-factor a general factor, we learn nothing, except that we should not trust psychometricians who build models without substantive expertise. Sadly, Eid actually made good contributions to emotion research in the 1990s that identified broad general factors of affect, which he appears to have forgotten. Back then, he modeled affect along three general dimensions (Steyer, Schwenkmezger, Notz, & Eid, 1997).
Concluding Rant
In conclusion, the main point of this blog post is that psychometricians benefit from developing ready-to-use, plug-and-play models that applied researchers can use without thinking about the model. The problem is that measurement requires understanding of the object that is being measured. Thermometers do not measure time and clocks are not good measures of weight. As a result, good measurement requires substantive knowledge and custom models that are fitted to the measurement problem at hand. Moreover, measurement models have to be embedded in a broader model that specifies theoretical assumptions that can be empirically tested (i.e., Cronbach & Meehl’s, 1955, nomological network). The bi-factor (S-1) model is unhelpful because it avoids falsification by letting researchers define constructs in terms of an arbitrary set of items. This may be useful for scientists who want to publish in a culture that values confirmation (bias), but it is not useful for scientists who want to explore the human mind and need valid measures to do so. For these researchers, I recommend learning structural equation modeling from some of the greatest psychometricians, such as Joreskog, Bentler, and now Muthen and Muthen, who helped researchers like me to test substantive theories. They provide the tools; you need to provide the theory and the data and be willing to listen to the data when your model does not fit. I learned a lot.
Psychology lacks solid foundations. Even basic methodological issues are contentious. In this tutorial, I revisit Brunner et al.’s (2012) tutorial on hierarchical factor analysis. The main difference between the two tutorials is the focus on confirmation versus exploration. I show how researchers can use hierarchical factor analysis to explore data. I show that exploratory HFA produces a plausible, better-fitting model than Brunner et al.’s confirmatory HFA. I also show that it is not possible to use statistical fit to compare hierarchical models to bi-factor models. To make this point, I show that my hierarchical model fits the data better than their bi-factor model. Instead, the choice between hierarchical models and bi-factor models is a theoretical question, not a statistical question. I hope that this tutorial will help researchers to realize the potential of exploratory structural equation modeling to uncover patterns in their data that are not predicted a priori.
Introduction
About a decade ago, Brunner, Nagy, and Wilhelm (2012) published an informative article about the use of Confirmatory Factor Analysis to analyze personality data, using a correlation table of performance scores on 14 cognitive ability tasks as an example.
They discussed four models, but my focus is on the modeling of these data with hierarchical CFA or hierarchical factor analysis. It is not necessary to include confirmatory in the name because only CFA can be used to model hierarchical factor structures. EFA by definition has only one layer of factors. Sometimes researchers conduct hierarchical analyses by using correlations among weighted sum scores (treated as factors) to represent lower levels of the hierarchy. However, this is a suboptimal approach to test hierarchical structures. The term confirmatory has also proven misleading because many researchers believe CFA can only be used to test a fully specified theoretical model and that any post-hoc modifications are not allowed and akin to cheating. This has stifled the use of CFA because theories are often not sufficiently specified to predict the pattern of correlations well enough to achieve good fit. It is also not possible to use EFA for exploration and CFA for confirmation because EFA cannot reveal hierarchical structures. So, if hierarchical structures are present, EFA will produce the wrong model and CFA will not fit. Maybe the best term would be hierarchical structural equation modeling, but hierarchical factor analysis is a reasonable term.
One of the advantages of CFA over EFA is that CFA makes it possible to create hierarchical models and to provide evidence for the fit of a model to data. CFA also makes it possible to modify models and to test alternative models, while EFA solutions are determined by some arbitrary mathematical rules. In short, any researcher interested in testing hierarchical structures in correlational data should use hierarchical factor analysis.
Hierarchical models are needed when a single layer of factors is unable to explain the pattern of correlations. It is easy to test the presence of hierarchies in a dataset by fitting a model with a single layer of independent factors to the data. Typically, these models do not fit the data. For example, Big Five researchers abandoned CFA because this model never fit actual personality data. Brunner et al. also show that a model with a single factor (i.e., Spearman’s general intelligence factor) did not fit the data, although all 14 variables show strong positive correlations with each other. This suggests that there is a general factor (suggests does not equal proves) and it suggests that there are additional relationships among some variables that are not explained by the general factor. For example, vocabulary and similarities are much more strongly correlated, r = .755, than vocabulary and digit span, r = .555. The aim of hierarchical factor analysis is to model the specific relationships among subsets of variables.
Brunner et al. present a single hierarchical model with one general factor and four specific factors.
Their Table 2 shows the fit of this model in comparison to three other, non-hierarchical models.
The results show that the hierarchical model fits the data better than a model with a single g-factor, RMSEA = .071 vs. .132 (lower values are better). The results also show that the model fits the data not as well as the first-order factor model. The difference between these models is that the hierarchical model assumes that the correlations among the four specific (first-order) factors can be explained by a single higher-order factor, that is, the g-factor. The reduction in fit can be explained by the fact that a single factor is not sufficient to explain this pattern of correlations. The higher-order part of the model uses four parameters (the ‘causal effects’ of the g-factor on the specific factors) to predict the six correlations among the four specific factors. This model is simpler, as one can see from the comparison of degrees of freedom (73 vs. 71). It would be wrong to conclude from this model comparison that there is no general factor and to reject the hierarchical model based on this statistical comparison. The reason is that it is possible to use the extra degrees of freedom to improve model fit. If two parameters are added to the hierarchical model, fit will be identical to the first-order factor model. There are many ways to improve fit and the choice should be driven by theoretical considerations. For example, one might not want to include negative relationships to fit a model of only positive correlations, although this is by no means a general rule. Modification indices suggested additional positive correlations between the PO and PS factors and between the PO and WM factors. Adding these parameters reduced the degrees of freedom by two and produced the same model fit as the first-order factor model. Thus, it does not seem to be possible to reduce the six correlations among the four specific factors to a model with fewer than six parameters. Omitting these additional relationships for the sake of a simple hierarchical structure is problematic because the model no longer reflects the pattern in the data. In short, it is always possible to fit the data with a hierarchical model that fits the data as well as a first-order model. Only the assumption that a single factor accounts for the pattern of correlations among the first-order factors is restrictive and can be falsified. Just like the single-factor model, the single higher-order factor model will often not fit the data.
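For readers who want to check the degrees-of-freedom arithmetic behind this comparison: with $k = 4$ first-order factors, the first-order model estimates all $k(k-1)/2 = 6$ factor correlations, whereas the single higher-order factor reproduces them from $k = 4$ loadings,

$$\Delta df = \frac{k(k-1)}{2} - k = 6 - 4 = 2,$$

which is exactly the difference between the 73 and 71 degrees of freedom noted above.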
A more important comparison is the comparison of the hierarchical model with the nested factor model which is more often called the bi-factor model. Bi-factor models have become very popular among users of CFA and Brunner et al.’s tutorial may have contributed to this trend. The bi-factor model is shown in the next figure.
Theoretically, there is not much of a difference between hierarchical and bi-factor models. Both models assume that variance in an observed variable can be separated into three components. One component reflects variance that is explained by the general factor and leads to correlations with all other variables. One component reflects variance that is explained by a specific factor that is only shared with a subset of variables. And the third component is unique variance that is not shared with any other variable, often called uniqueness, disturbance, residual variance, or, erroneously, error variance. The term error variance can be misleading because unique variance may be meaningful variance that is just not shared with other variables. Brunner et al.’s tutorial nicely shows, in the hierarchical model, the variance that is shared only among some measures. This variance is often omitted from figures because it is the residual variance in first-order factors that is not explained by the higher-order factor. For example, in the figure of the hierarchical model, we see that variance in the specific factor VC is separated into variance explained by the g-factor and residual variance that is not explained by the g-factor. This residual variance is itself a factor (an unobserved variable) and this factor conceptually corresponds to the specific factors in the bi-factor model. Thus, conceptually the two models are very similar and may be considered interchangeable representations. Yet, the bi-factor model fits the data much better than the hierarchical model, RMSEA = .060 vs. .071. Moreover, the bi-factor model meets (barely) the standard criterion of acceptable model fit for the RMSEA criterion (.06). Thus, readers may be forgiven if they think the bi-factor model is a better model of the data.
Moreover, Brunner et al.’s tutorial suggests that the higher-order model should only be chosen if it fits the data as well as the bi-factor model, which is not the case in this example. Thus, their recommendation leads to the assumption that the better fit of a bi-factor model can be used to decide between these two models. Here I want to show that this recommendation is false and that it is always possible to create a hierarchical model that fits as well as a bi-factor model. The reason is that the poorer fit of the hierarchical model is a cost of its parsimony. That is, it has 73 degrees of freedom compared to 64 degrees of freedom for the bi-factor model. This means that it is possible to add 9 parameters to the hierarchical model to improve fit within a hierarchical structure. Inspection of modification indices suggested that a key problem of the hierarchical model was that it underestimated the relationship of the arithmetic variable with the g-factor. This relationship is mediated by (goes through) working memory and is influenced by the relationship of working memory with the g-factor. We can relax this assumption by allowing for a direct relationship between the arithmetic variable and the g-factor. Just this single additional parameter improved model fit and made the hierarchical model fit the data better than the bi-factor model, chi2(70) = 403, CFI = .980, RMSEA = .059. Of course, in other datasets more modifications may be necessary, but the main point generalizes from this example to other datasets. It is not possible to use model fit to favor bi-factor models over hierarchical models. The choice of one or the other model needs to be based on other criteria.
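For readers who want to see how the RMSEA values reported here relate to chi-square, degrees of freedom, and sample size, here is the standard formula as a small function. The sample size in the example call is a placeholder, not the actual N of Brunner et al.’s dataset, and some programs use N rather than N − 1 in the denominator.

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    # RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1)))
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# chi-square and df from the modified hierarchical model above;
# n is hypothetical, so the result is illustrative only
print(round(rmsea(chi2=403, df=70, n=1_400), 3))
```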
Brunner et al. (2012) discuss another possible reason to favor bi-factor models. Namely, it is not possible in a hierarchical model to relate a criterion variable to the general factor and to all specific factors. The reason is that this model is not identified. That is, it is not possible to estimate the direct contribution of the general factor because the general factor is already related to the criterion by means of the indirect relationships through the specific factors. While Brunner et al. (2012) suggest that this is a problem, I argue that this is a feature of a hierarchical model and that the effect of the general factor is just mediated by the specific factors. Thus, we do not need an additional direct relationship from the general factor to a criterion. We can simply estimate the effect of the general factor by computing the total indirect effect that is implied by (a) the effect of the general factor on the specific factors and (b) the effect of the specific factors on the criterion. In contrast, the bi-factor model provides no explanation of the (implied) causal effect of the general factor on the criterion. For example, high school grades in English might be related to the g-factor because the g-factor influences verbal intelligence (VC) and verbal intelligence is the more immediate cause of better performance in a language class. Thus, a key advantage of a hierarchical model is that it proposes a causal theory in which the effects of broad personality traits on specific behaviors are mediated by specific traits (e.g., the effect of Extraversion on drug use is mediated by the Sensation Seeking facet of Extraversion and not the Assertiveness facet).
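To make the total-indirect-effect calculation concrete, here is a minimal sketch with made-up coefficients (the factor labels follow Brunner et al.’s VC, PO, WM, and PS, but the numbers are invented):

```python
# hypothetical coefficients for illustration only
a = {"VC": 0.85, "PO": 0.80, "WM": 0.90, "PS": 0.60}   # g -> specific factor
b = {"VC": 0.40, "PO": 0.05, "WM": 0.10, "PS": 0.00}   # specific factor -> criterion

# total indirect effect of g on the criterion, mediated by the specific factors
total_indirect = sum(a[f] * b[f] for f in a)
print(round(total_indirect, 3))
```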
In contrast, a bi-factor model assumes that the general and the specific factors are independent. One plausible reason to use a bi-factor model would be the modeling of method factors like acquiescence or social desirability (Anusic et al., 2009). A method factor would influence responses to all (or most) items, but it would be false to assume that this effect is mediated by the specific factors that represent actual personality constructs. Instead, the advantage of CFA is that it is possible to separate method and construct (trait) variance to get an unbiased estimate of the effect size for the actual traits. Thus, the choice of a hierarchical model or a bi-factor model should be based on substantive theories about the nature of factors and cannot be made in a theoretical vacuum.
How to Build a Hierarchical Model
There are no tutorials about the building of SEM models because the terminology of confirmatory factor analysis has led to the belief that it is wrong to explore data with CFA. Most of the time, authors may explore the data and then present the final model as if it was created before looking at the data; a practice known as hypothesizing after the results are known (HARKing; Kerr, 1998). Other times, authors will use a cookie-cutter model that they can justify because everybody uses the same model, even when it makes no theoretical sense. The infamous cross-lagged panel model is one example. Here I show how authors can create a plausible model that fits the data. There is nothing wrong with doing so, and all mature sciences have benefitted from developing models, testing them, testing competing models against each other, and making progress in the process. The lack of progress in psychology can be explained by avoiding this process and pretending that a single article can move from posing a question to providing the final answer. Here I am going to propose a model for the 14 cognitive performance tests. It might be the wrong model, but at least it fits the data well.
The pattern of correlations in Table 1 shows many strong positive correlations. There is also 100 years of research on the positive manifold (positive correlations) among cognitive performance tasks. Thus, a good starting point is the one-factor model that simply assumes that a general factor contributes to scores on all 14 variables. As we already saw, this model does not fit the data. Modern software makes it possible to look for ways to improve the model by inspecting so-called modification indices (MI). As a first step, we want to look for big (juicy) MI that are unlikely to be chance results. MI are chi-square distributed with 1 df, and chi-square values with 1 df are just squared z-scores (scores on the standard normal). Thus, an MI of 25 corresponds to z = 5, which is the criterion used in particle physics to rule out false positives. Surely, this criterion is good enough for psychologists. It is therefore problematic to publish models with MI > 25 because such a model ignores a reliable pattern in the data. Of course, MI are influenced by sample size and the effect size may be trivial. However, it is better to include these parameters in the model and to show that they are small and negligible rather than to present an ill-fitting model that may not fit because it omitted an important parameter. Thus, my first recommendation is to inspect modification indices and to use them. This is not cheating. This is good science.
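Because an MI is a 1-df chi-square, translating the MI > 25 rule into a z-score or p-value takes only a couple of lines (scipy is used here only for the tail probabilities):

```python
from math import sqrt
from scipy.stats import norm, chi2

mi = 25                        # a modification index is a 1-df chi-square
z = sqrt(mi)                   # z = 5.0
p_two_sided = 2 * norm.sf(z)   # roughly 6e-7
p_chi2 = chi2.sf(mi, df=1)     # identical: the 1-df chi-square tail
print(z, p_two_sided, p_chi2)
```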
We want to first look for items that show strong residual correlations. A residual correlation is a correlation between the residual variances of two variables after we remove the variance that is shared with the general factor. It is ok to start with the highest MI. In this case, the highest MI is 273, z = 16.5, which is very, very unlikely to be a chance finding. The suggested effect size for this residual correlation is r = .48. So, this relationship is clearly substantial. To add this finding to a hierarchical model, we do not simply add a correlated residual. Instead, we model this residual correlation as a simple factor in a hierarchical model. Thus, we create the first simple factor in our hierarchical model. Because this simple factor is related to the g-factor (unlike the bi-factor model where it would be independent of the general factor), we can estimate the strength of the factor loadings freely and the model is just identified. Optionally, we could constrain the loadings of the two items to be equal, which can sometimes be helpful in the beginning to stabilize the model. The first specific factor in the model represents the shared variance between Letter-Number-Sequencing and Digit Span. A comparison with Brunner et al.’s model shows that this factor is related to the Working Memory factor in their model. The only difference is that their factor has an additional indicator, Arithmetic. So, we didn’t really do anything wrong by starting totally data-driven in our model.
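For readers who want to see what a residual correlation is numerically: under a one-factor model with standardized loadings, the implied correlation between two items is the product of their loadings, and the residual is whatever the observed correlation leaves over. The first two correlations below are the ones quoted earlier; the third correlation and the loadings are made up for illustration.

```python
import numpy as np

# correlations among Vocabulary, Similarities, and Digit Span; the first two
# values are quoted in the text, the third is hypothetical
R = np.array([[1.000, 0.755, 0.555],
              [0.755, 1.000, 0.520],
              [0.555, 0.520, 1.000]])
lam = np.array([0.80, 0.78, 0.70])        # hypothetical one-factor loadings

implied = np.outer(lam, lam)              # implied correlation = lam_i * lam_j
np.fill_diagonal(implied, 1.0)
residuals = R - implied
print(residuals.round(3))                 # a large positive residual flags a missing specific factor
```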
The first simple factor in the model is basically a seed to explore whether other items may also be related to this simple factor. Now that we have a factor to represent the correlation between the Letter-Number-Sequencing and Digit Span variables, we can look for MI that suggest loadings of other variables on this factor. If this is not the case, other items may still be highly correlated and can be used to add additional simple factors. We can run the revised model and examine the MI. This revealed a significant MI = 48 for Arithmetic. This is also consistent with Brunner et al.’s model, and I added it. However, Arithmetic still had a direct loading on the general factor, which is not consistent with their model.
It is well known that MI can be misleading. For example, the MI for this model suggested a strong loading of Matrix Reasoning on the Working Memory factor. This doesn’t seem implausible. However, Brunner et al.’s model suggests that Matrix Reasoning is related to another specific factor that is not yet included in the model. Therefore, I first looked for other residual correlations that could be the seed for additional specific factors. Also, the MIs for additional residual correlations were larger. The largest MI was for the residual correlation between Matrix Reasoning and Information. This is inconsistent with Brunner et al.’s model, which suggests these items belong to different specific factors. However, to follow the data-driven approach, I used this pair as a seed for a new specific factor. The next big correlated residual was found for Block Design and Comprehension. Once more, this did not correspond to a specific factor in Brunner et al.’s model, but I added it to the model. The next run showed a still sizeable MI (22) for Digit-Symbol-Coding on the Working Memory factor. So, I also added this parameter, but the effect size was small, b = .19. Thus, Brunner et al.’s model is not wrong, but omitting this parameter lowers overall fit. As there were no other notable MI for the Working Memory factor, I looked for the correlated residual with the highest MI as a seed for another specific factor. This was the correlated residual for Digit-Symbol-Coding and Symbol Search. This correlated residual is represented by the PS factor in Brunner et al.’s model. The next high modification index suggested a factor for Vocabulary and Comprehension. This is reflected in the VC factor in Brunner et al.’s model. The MIs of this model suggested a strong loading of Similarities on the VC factor (MI = 297), which is also consistent with Brunner et al.’s model. Another big MI (256) suggested that Information also loads on the VC factor, again consistent with Brunner et al.’s model. The next inspection of MIs, however, suggested an additional moderate loading of Arithmetic on the VC factor that is not present in Brunner et al.’s model. I added it to the model and model fit improved, CFI = .976 vs. .973. Although the loading was only b = .29, not including it in a model lowers fit. The next round suggested another weak loading of Letter-Number-Sequencing on the PS factor. Adding this parameter further increased model fit, CFI = .978 vs. .976, although the effect size was only b = .23. The next big MI was for the correlated residual between Object-Assembly and Block-Design (MI = 82). This relationship is represented by the PO factor in Brunner et al.’s model.
At this point, the model already had good overall fit, CFI = .982, RMSEA = .055, although three variables were still unrelated to the specific factors, namely Picture Completion, Picture Arrangement, and Matrix Reasoning. It is possible that these variables are directly related to the general factor, but it is also possible to model the relationship as being mediated by specific factors. Theory would be needed to distinguish between these models. To examine possible mediating specific factors, I removed the direct relationships and examined the MIs of this (badly fitting) model to find mediators. Consistent with Brunner et al.’s model, Picture Completion, Picture Arrangement, and Matrix Reasoning showed the highest MI for the PO factor, although the MI for the g-factor was equally high or higher. Thus, I added these three variables as indicators to the PO factor.
At this point, the biggest MI (76) suggested a correlated residual between Object-Assembly and Block-Design. However, these two variables are already indicators of the PO factor. The reason for the residual correlation is that the PO factor also has to explain relationships with the other variables that are related to the PO factor. Apparently, Object-Assembly and Block-Design are more strongly related than the loadings on the PO factor predict. To allow for this relationship within a hierarchical model it is possible to add another layer in the hierarchy and specify a sub-PO factor that accounts for the shared variance between Object-Assembly and Block-Design. This modification improved model fit, CFI = .977 vs. .981. The RMSEA was .045 and in the range of acceptable fit (.00 to .06).
At this point most MI are below 25, suggesting that further modifications are riskier and would, in any case, be minor and have no substantial influence on the broader model. Some of the remaining MI suggested negative relationships. While it is possible that some abilities are also negatively related to others, I decided not to include these parameters in the model. One MI (31) suggested a correlation between the VC and WM factors. A simple way to improve fit would be to add this correlation to the model. However, there are other ways to accommodate this suggestion. Rather than allowing for a correlation between factors, it is possible to add secondary loadings of VC variables on the WM factor and vice versa. The MI for the correlation is often bigger because it combines several specific modifications. Thus, it is a theoretical choice which approach should be taken. One simple rule would be to add the correlation if all of the secondary loadings show the same trend and to allow for secondary loadings if the pattern is inconsistent. In this case, all of the VC variables showed a significant loading on the WM factor and I used these secondary loadings to modify the model.
There were only a few MI greater than 25 left. Two were correlated residuals of Arithmetic with Information and with Matrix Reasoning. Arithmetic was already complex and related to several specific factors. I therefore considered the possibility that this variable reflects several specific forms of cognitive abilities. The finding of correlated residuals suggested that even some of the unique variance of other variables is related to the Arithmetic task. To model this, I treated Arithmetic as a lower-order construct that could be influenced by several of the other variables. Exploration identified four variables that seemed to contribute uniquely to variance in the Arithmetic variable. I therefore removed Arithmetic as an indicator of specific factors and added it as a variable that is predicted by these four variables, namely Matrix Reasoning, b = .32, Information, b = .28, Letter-Number-Sequencing, b = .18, and Digit Span, b = .13.
There was only one remaining MI greater than 25. This was a correlation between the PO factor and the Information variable. Although the MI for a loading of Information on the PO factor did not meet the threshold, this loading would also improve model fit and is easier to interpret. Thus, I added this parameter instead. The loading was weak, b = .12, and not significant at the .01 level. The MI for the correlation between Information and the PO factor was now below the threshold of 25. As this parameter is not really relevant, I decided to remove it again and use this model as the final model. The fit of this final model met standard criteria of model fit, CFI = .990, RMSEA = .042. This fit is better than the fit of Brunner et al.’s hierarchical model, CFI = .970, RMSEA = .071, and their favored bi-factor model, CFI = .981, RMSEA = .060. While model fit cannot show that a model is the right model, model comparison suggests that models with worse fit are not the right model. This does not mean that these models are entirely wrong. Well-fitting models are likely to share some common elements. Thus, model comparison should not be limited to a comparison of fit indices but should also consider the actual differences between the models. To make this comparison easy, I show how my model is similar to and different from their model.
The Figure shows that key aspects of the two models are similar. One difference is a couple of weak secondary loadings of two VC variables on the WM factor. Including these parameters merely shows where the simpler model produces some misfit to the data. The second difference is the additional relationship between Object Assembly and Block Design. The models would look even more similar if this relationship were modeled by simply adding a correlated residual between these two variables. The interpretation is the same. There is some factor that influences performance on these two tasks, but not on the other 12 tasks. The biggest difference is the modeling of Arithmetic. In my model, Arithmetic shares variance with Digit Span and Letter-Number-Sequencing, and this shared variance includes variance that is explained by the WM factor and variance that is unique to these variables. In Brunner et al.’s model, Arithmetic is only related to the variance that is shared by Digit Span and Letter-Number-Sequencing, not the unique variance of these two variables. In addition, my model shows even stronger relationships of Arithmetic with Information and Matrix Reasoning. These differences may have implications for theories about this specific task, but have no practical implications for theories that try to explain the general pattern of correlations. In conclusion, an exploratory structural equation modeling approach produces a hierarchical model that is essentially the same as a published model. Thus, conventions that prevent researchers from exploring their data with SEM are hindering scientific progress. One could argue that Brunner et al.’s hierarchical model is equally atheoretical and that the similarity merely shows that both models are wrong. To make this a valid criticism, intelligence researchers would have to specify a theoretical model and then demonstrate that it fits the data at least as well as my model. Merely claiming expertise in a substantive area is not enough. The strength of SEM is that it can be used to test different theories against each other. Eventually, more data will be needed to pit alternative models against each other. My model serves as a benchmark for other models even if it was created by looking at the data first. Developing theories of data is not wrong. In fact, it is not clear how one would develop a theory without having any data. And the worst practice in psychology is when researchers have theories without data and then look for data that confirm their theory and suppress contradictory evidence. This practice has led to the replication crisis in experimental social psychology.
Reliability and Measurement Error
Hierarchical factor analysis can be used to examine structures of naturally occurring objects (e.g., the structure of emotions, Shaver et al. 1987) or to examine the structure of man-made objects. CFA is often used for the latter purpose, when researchers examine the psychometric properties of measures that they created. In the present example, the 14 variables are not a set of randomly selected tasks. Rather, they were developed to measure intelligence. To be included as a measure of intelligence, a measure had to demonstrate that it correlates with other measures under the assumption that intelligence is a general factor that influences performance on a range of tasks. It is therefore not surprising that the 14 variables are strongly correlated and load on a common factor. The reason is that they were designed to be measures of this general factor. Moreover, intelligence researchers are not only interested in correlations among measures and factors. They also want to use these measures to assign scores to individuals. The assignment of scores to individuals is called assessment. Ideally, we would just look up individuals’ standing on the general factor, but this is not possible. A factor reflects shared variance among items, but we only know individuals’ standing on the observed variables, not the variance that is shared. To solve this problem, researchers average individuals’ scores on individual variables and use these sum-scores as a measure of individuals’ standing on the factor. The use of sum-scores creates measurement error because some of the variance in sum-scores reflects the variance in specific factors and the unique variances of the variables. Averaging across several variables can reduce measurement error because averaging reduces the influence of specific factors and residual variances. Brunner et al. (2012) discuss several ways to quantify the amount of construct variance (the g-factor) in sum-scores of the 14 variables.
A simple way to explore this question is to add sum-scores to the model with a formative measurement model, where a new variable is regressed on the observed variables and the residual variance is fixed to zero. This new variable represents the variance in actual sum-scores and it is not necessary to actually create them and add them to the variable set. To examine the influence of the general factor, it is possible to use mediation analysis because the effect of the g-factor on the sum score is fully mediated by the 14 variables. This model shows that the standardized effect of the g-factor on the sum-scores is r = .96, which implies 92% of the variance is explained by the g-factor. It is also possible to examine the sources of the remaining 8% of variance that does not reflect the g-factor by examining mediated paths from the unique variances in the specific factors. The unique variance in the WM, PS, and PO factor explained no more than 1% each, but the unique VC variance explained 3% of the variance in sum scores. The remaining 2% can be attributed to unique variances in the 14 variables. Overall, these results suggest that the sum score is a good measure of the g-factor. This finding does not tell us what the g-factor is, but it does suggest that sum-scores are a highly valid measure of the g-factor.
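The same decomposition can be written down directly from the model parameters. With standardized items, the share of sum-score variance due to the g-factor is the squared sum of the g-loadings divided by the model-implied variance of the sum score (essentially coefficient omega-hierarchical). The loadings below are made up, so the resulting proportion differs from the 92% reported above.

```python
import numpy as np

lam_g = np.full(14, 0.7)                          # hypothetical g-loadings
lam_s = np.full(14, 0.4)                          # hypothetical specific-factor loadings
spec = np.repeat([0, 1, 2, 3], [4, 4, 3, 3])      # which specific factor each item loads on
uniq = 1 - lam_g**2 - lam_s**2                    # unique variances (standardized items)

var_g = lam_g.sum() ** 2                          # sum-score variance due to g
var_spec = sum(lam_s[spec == k].sum() ** 2 for k in range(4))
var_total = var_g + var_spec + uniq.sum()
print(round(var_g / var_total, 2))                # proportion of sum-score variance due to g
```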
The indirect path coefficients can be used to shorten a scale by eliminating variables that make a small contribution to the total effect. Following this approach, I removed four variables: Arithmetic, Information, Comprehension, and Letter-Number Sequencing. The effect of the g-factor on this sum-score was just as high, b = .96, as for the full scale. It is of course possible that this result is unique to this dataset, but the main point is that HFA can be used to determine the contribution of factors to sum scores in order to create measures and to examine their construct validity under the assumption that a factor corresponds to a construct (Cronbach & Meehl, 1955). To support the interpretation of factors, it is necessary to examine factors in relationship to other constructs, which also can be done using SEM. Moreover, evidence of validity is by no means limited to correlational evidence. Experimental manipulations can be added to an SEM model to demonstrate that a manipulation changes the intended construct and not some other factors. This cannot be done with sum scores because sum-scores combine valid construct variance and measurement error. As a result, validation of measures requires specification of a measurement model in which constructs are represented as factors and a demonstration that factors are related to other variables as predicted.
Conclusion
This tutorial on hierarchical factor analysis was written in response to Brunner et al.’s (2012) tutorial on hierarchically structured constructs. There are some notable differences between the two tutorials. First, Brunner et al. (2012) presented a hierarchical model without detailed explanations of the model. The model assumes that the 14 measures are first of all related to four specific factors and that the four specific factors are related to one general factor. This simple model implicitly assumes no secondary loadings and no additional relationships among first-order factors. These restrictive assumptions are unlikely to be true and often lead to bad fit. While Brunner et al. (2012) claim that their model has adequate fit, the RMSEA value of .071 is high and worse than the fit of the bi-factor model. Thus, this model should not be accepted without exploration of alternative models. The key difference between Brunner et al.’s tutorial and my tutorial is that Brunner et al. (2012) imply that their hierarchical model is the only possible hierarchical model that needs to be compared to models that do not imply a hierarchy, such as the bi-factor model. This restrictive view is shared by authors who claim simple CFA models should not include correlated residuals or secondary loadings. I reject this confirmatory straitjacket that is unrealistic and leads to poor fit. I illustrate how hierarchical models can be created that actually fit data. While this approach is exploratory, it has the advantage that it can produce theoretical models that actually fit data. These models can then be tested in future studies. Another difference between the two tutorials is that I present a detailed description of the process of building a hierarchical model. This process is often not explicitly described because CFA is presented as a tool that examines the fit between an a priori theory and empirical data. Looking at the data and making a model fit the data is often considered cheating. However, it is only cheating when authors fail to disclose the retrofitting of the model. Honest exploratory work is not cheating and is an integral part of science. After all, it is not clear how scientists could develop good theories in the absence of data that constrain theoretical predictions.
Another common criticism of factor models in general is that factor models imply causality while the data are only correlational. First of all, experimental manipulations can be added to models to validate factors, but sometimes this is not possible. For example, it is not clear how researchers could manipulate intelligence to validate an intelligence measure. Even if such manipulations were possible, the construct of intelligence would be represented by a latent variable. Some psychologists are reluctant to accept that factors correspond to causes in the real world. Even these skeptics have to acknowledge that test scores are real and something caused some individuals to provide a correct answer whereas others did not. This variance is real and it was caused by something. A factor first of all shows that the causes that produce good performance on one task are not independent of the causes of performance on another task. Without using the loaded term intelligence, the g-factor merely shows that all 14 measures have some causes in common. The presence of a g-factor does not tell us what this cause is, nor does it mean that there is a single cause. It is well-known that factor analysis does not reveal what a factor is and that factors may not measure what they were designed to measure. However, factors show which observed measures are influenced by shared causes when we can rule out the possibility of a direct relationship. That is, doing well on one task does not cause participants to do well on another task. In a hierarchical model, higher-order factors represent causes that are shared among a large number of observed measures. Lower-order factors represent causes that are shared by a smaller number of measures. Observed measures that are related to the same specific factor are more likely to have a stronger relationship because they share more causes.
In conclusion, I hope that this tutorial encourages more researchers to explore their data using hierarchical factor analysis and to look at their data with open eyes. Careful exploration of the data may reveal unexpected results that can stimulate thought and suggest new theories. The development of new theories that actually fit data may help to overcome the theory crisis in psychology that is based on an unwillingness and inability to falsify old and false theories. The progress of civilizations is evident in the ruins of old ones. The lack of progress in psychology is evident in the absence of textbook examples of ancient theories that have been replaced by better ones.
Personality psychologists have conducted hundreds of studies that relate various personality measures to each other. The good news about this research is that it is relatively easy to do and doesn’t cost very much. As a result, sample sizes are big enough to produce stable estimates of the correlations between these measures. Moreover, personality psychologists often study many correlations at the same time. Thus, statistical significance is not a problem because some correlations are bound to be significant.
A key problem with personality psychology is that many studies are mono-method studies. This often leads to spurious correlations that are caused by method factors (Campbell & Fiske, 1959). For example, self-report measures often correlate with each other because they are influenced by socially desirable responding. It is therefore interesting to find articles that used multiple methods, which makes it possible to separate method factors from personality factors.
One common finding from multi-method studies is that the Big Five personality traits often appear correlated when they are measured with self-reports, but not when they are measured with multiple methods (i.e., multiple raters) (Anusic et al., 2009; Biesanz & West, 2004; DeYoung, 2006). Furthermore, the correlations among self-ratings of the Big Five are explained by an evaluative or desirability factor.
Despite this evidence, some personality psychologists argue that the Big Five are related to each other by substantive traits. One model assumes that there are two higher-order factors. One factor produces a positive correlation between Extraversion and Openness, and another factor produces positive correlations between Emotional Stability (low Neuroticism), Agreeableness, and Conscientiousness. These two factors are supposed to be independent (DeYoung, 2006). Another model proposes a single higher-order factor that is called the General Factor of Personality (GFP). This factor was originally proposed by Musek (2007) and then championed by the late psychologist Rushton. Planck suggested that bad theories die only after their champions die, but in this case Dimitri van der Linden has taken it upon himself to keep the GFP alive. I met Dimitri at a conference many years ago and discussed the GFP with him, but evidently my arguments fell on deaf ears. My main point was that you need to study factors with factor analysis. A simple sum score of Big Five scales is not a proper way to examine the GFP because this sum score also contains variance of the specific Big Five factors. Apparently, he is too stupid or lazy to learn structural equation modeling and to use CFA in studies of the GFP.
Instead, he computes weighted sum scores as indicators of factors and uses these sum scores to examine relationships of higher-order factors with intelligence.
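Before turning to the reanalysis, a small simulation with purely hypothetical numbers illustrates why this practice is problematic: even when Extraversion and Openness are independent (i.e., no Plasticity factor exists) and only Openness is related to intelligence, a weighted sum of the two scales still correlates with intelligence.

```python
# Hypothetical simulation: no Plasticity factor exists, yet an E+O composite
# correlates with IQ purely because Openness does.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
e = rng.standard_normal(n)                       # Extraversion, unrelated to IQ
o = rng.standard_normal(n)                       # Openness, independent of E
iq = 0.3 * o + np.sqrt(1 - 0.3**2) * rng.standard_normal(n)

plasticity_composite = (e + o) / 2               # the kind of sum score at issue

print(round(np.corrcoef(o, iq)[0, 1], 2))                      # ~0.30
print(round(np.corrcoef(e, iq)[0, 1], 2))                      # ~0.00
print(round(np.corrcoef(plasticity_composite, iq)[0, 1], 2))   # ~0.21, driven entirely by Openness
```

A factor-level analysis, in contrast, can separate the shared E–O variance from the unique Openness variance and show which of the two actually carries the relationship.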
The authors then find that the Plasticity scale is related to self-rated and objective measures of intelligence and interpret this as evidence that the Plasticity factor is related to intelligence. However, the Plasticity scale is just an average of Extraversion and Openness, and it is possible that this correlation is driven by the unique variance in Openness rather than the shared variance between Openness and Extraversion that corresponds to the Plasticity factor. In other words, the authors fail to examine how higher-order factors are related to intelligence because they do not examine this relationship at the level of factors, which requires structural equation modeling. Fortunately, they provided the correlations among the measures in their two studies, and I was able to conduct a proper test of the hypothesis that Plasticity is related to intelligence. I fitted a multiple-group model to the correlations among the Big Five scales (different measures were used in the two studies), the self-report of intelligence, and the scores on Cattell's IQ test. Overall model fit was acceptable, CFI = .943, RMSEA = .050. Figure 1 shows the model. First of all, there is no evidence of Stability and Plasticity as higher-order factors, which would produce correlations between Extraversion (EE) and Openness (OO) and correlations between Neuroticism (NN), Agreeableness (AA), and Conscientiousness (CC). Instead, there was a small positive correlation between Neuroticism and Openness and between Agreeableness and Conscientiousness. There was evidence of a general factor that influenced self-ratings of the Big Five (N, E, O, A, C) and self-ratings of intelligence (sri), although the effect size for self-reported intelligence was surprisingly small. This might be due to the fact that intelligence was actually assessed, which may have led to more honest reporting. Most important, the general factor (h) was unrelated to performance on Cattell's test. This shows that the factor is unique to the method of self-ratings and supports the interpretation of this factor as a method factor (Anusic et al., 2009). Finally, self-ratings and objective test scores reflect a common factor, which shows that there is some valid variance in self-ratings. This has been reported before (Borkenau & Liebler, 1992). The intelligence factor was related to Openness, but not to Extraversion, which is also consistent with other studies that examined the relationship between personality and IQ scores. Evidently, intelligence is not related to Plasticity because Plasticity is the shared variance between Extraversion and Openness, and there is no evidence that this shared variance exists, nor any evidence that Extraversion is related to intelligence.
These results show that van der Linden and colleagues came to the wrong conclusion because they did not analyze their data properly. To make claims about higher-order factors, it is essential to use structural equation modeling. Structural equation modeling shows that the Plasticity and Stability higher-order factors are not present in these data (i.e., the pattern of correlations is not consistent with this model) and that only Openness is related to intelligence, which can also be seen by just inspecting the correlation tables. Finally, the authors misinterpret the relationship between the general factor and self-rated intelligence. "First, their [high GFP individuals] intellectual self-confidence might be partly rooted in their actual cognitive ability as SAI and g shared some variance in explaining Plasticity and the GFP" (p. 4). This is pure nonsense. As is clearly visible in Figure 1, the general factor is not related to scores on Cattell's test, and as a result it cannot be related to the shared variance between test scores and self-rated intelligence that is reflected in the i factor in Figure 1. There is no path linking the i-factor with the general factor (h). Thus, individuals' standing on the h-factor is independent of their actual intelligence. A much simpler interpretation of the results is that self-rated intelligence is influenced by two independent factors. One is rooted in accurate self-knowledge and correlates with objective test scores; the other is rooted in overly positive ratings on desirable traits and reflects a general tendency to rate oneself favorably across traits. Although this plausible interpretation of the results is based on a published theory of personality self-ratings (Anusic et al., 2009), the authors simply ignore it. This is bad science, especially in correlational research that requires testing of alternative models.
In conclusion, I was able to use the authors' data to support an alternative theory that they deliberately ignored because it challenges their prior beliefs. There is no evidence for a General Factor of Personality that gives some people a desirable personality and others an undesirable one. Instead, some individuals exaggerate their positive attributes in self-reports. Even if this positive bias (self-enhancement) were beneficial, it is conceptually different from actually possessing these attributes. Being intelligent is not the same as thinking that one is intelligent, and thinking that one understands personality factors is different from actually understanding personality factors. I am not the first critic of personality psychologists' lack of clear thinking about factors (Borsboom, 2006).
“In the case of PCA, the causal relation is moreover rather uninteresting; principal component scores are “caused” by their indicators in much the same way that sumscores are “caused” by item scores. Clearly, there is no conceivable way in which the Big Five could cause subtest scores on personality tests (or anything else, for that matter), unless they were in fact not principal components, but belonged to a more interesting species of theoretical entities; for instance, latent variables. Testing the hypothesis that the personality traits in question are causal determinants of personality test scores thus, at a minimum, requires the specification of a reflective latent variable model (Edwards & Bagozzi, 2000). A good example would be a Confirmatory Factor Analysis (CFA) model.”
In short, if you want to talk about personality factors, you need to use CFA and examine the properties of latent variables. It is really hard to understand why personality psychologists do not use this statistical tool when most of their theories are about factors as causes of behavior. Borsboom (2006) proposed that personality psychologists dislike CFA because it can disprove theories, and psychologists seem to have an unhealthy addiction to confirmation bias. Doing research to find evidence for one's beliefs may feel good and may even lead to success, but it is not science. Here I show that Plasticity and Stability do not exist in this dataset, and the authors did not notice this because they treat sum scores as if they were factors. Of course, we can average Extraversion and Openness and call this average Plasticity, but this average is not a factor. To study factors, it is necessary to specify a reflective measurement model, and there is a risk that the model may not fit the data. Rather than being avoided, this outcome should be celebrated because falsification is the root of scientific progress. Maybe the lack of theoretical progress in personality psychology can be attributed to an unwillingness to disconfirm existing theories.
In this blog post (pre-print), I examine the construct validity of the Elemental Psychopathy Assessment Super-Short Form scale (EPA-SSF) with Rose et al.'s (2022) open data. I examine construct validity by means of structural equation modeling. I find that the proposed three-factor structure does not fit the data and find support for a four-factor structure. I also find evidence for a fifth factor that reflects a tendency to endorse desirable traits more and undesirable traits less. Most of the reliable variance in the scale scores is predicted by this factor, whereas substantive traits play a small role. I also show that this general factor contributes to the prediction of self-reported criminal behaviors. I find no evidence to support the inclusion of Emotional Stability in the definition of psychopathy. Finally, I raise theoretical objections about the use of sum scores to measure multi-trait constructs. Based on these concerns, I argue that the EPA-SSF is not a valid measure of psychopathy and that results based on this measure do not add to the creation of a nomological net surrounding the construct of psychopathy.
Introduction
Measurement combines invention and discovery. The invention of microscopes made it possible to see germs and to discover the causes of many diseases. Turning a telescope to the skies allowed Galileo to make new astronomical discoveries. In the 20th century, psychology emerged as a scientific discipline, and its history is marked by the development of psychological measures. Nowadays, psychological measurement is called psychometrics. Unfortunately, psychometrics is not a basic, fundamental part of mainstream psychological science. Instead, psychometrics is mostly taught in education departments and used for the applied purposes of educational testing. As a result, many psychologists who use measures in their research have very little understanding of psychological measurement.
For any measure to be able to discover new things, it has to be valid. That is, the numbers that are produced by a measure should mostly reflect variation in the actual objects that are being examined. Science progresses when new measures are invented that produce more accurate, detailed, and valid information about the objects that are being studied. For example, developments in technology have created powerful microscopes and telescopes that can measure small objects in nanometers and galaxies billions of light-years away. In contrast, psychological measures are more like kaleidoscopes. They show pretty images, but these images are not a reflection of actual objects in the real world. While this criticism may be harsh, it is easily supported by the simple fact that psychologists do not quantify the validity of their measures and that there are often multiple measures that claim to measure the same construct even though they are only moderately correlated. For example, at least eight different measures claim to measure narcissism without a clear definition of narcissism and without validity information that makes it possible to pick the best measure of narcissism (Schimmack, 2022).
A fundamental problem in psychological science is the way scientific findings are produced. Typically, a researcher has an idea, conducts a study, and then publishes the results if they support the initial idea. This bias is easily demonstrated by the fact that 95% of articles in psychology journals support researchers' ideas, which is an unrealistically high success rate (Sterling, 1959; Sterling et al., 1995). Journals are also reluctant to publish work that is critical of previous articles, especially if these articles are highly cited, and original authors are often asked to serve as expert reviewers of work that criticizes their own articles. It would take extra-human strength to be impartial in these reviews, and these self-serving reviews are often the death of critical work. Thus, psychological science lacks the basic mechanism that drives scientific progress: falsification of bad theories, or learning from errors. Evidence for this lack of self-correction, which is a necessary element of science, emerged during the past decade, known as the replication crisis, when researchers dared to publish replication failures of well-known findings. However, while the replication crisis has focused on empirical tests of hypotheses, criticism of psychological measures has remained relatively muted (Flake & Fried, 2020). It is time to apply the same critical attitude that fueled the replication crisis to psychological measurement. I predict that many of the existing measures lack sufficient construct validity or are redundant with other measures. As a result, progress in psychological measurement would be marked by a consolidation of measures that is based on a comparison of measures' construct validity. As one of my favorite psychologists once observed in a different context, in science "less is more" (Cohen, 1990), and this is also true for psychological measures. While cuckoo clocks are fun, they are not used for scientific measurement of time.
Psychopathy
A very recent article reviewed the literature on psychopathy (Patrick, 2022). The article describes psychopathy as a combination of three personality traits.
A conceptual framework that is helpful for assimilating different theoretical perspectives and integrating findings across studies using different measures of psychopathy is the triarchic model (Patrick et al. 2009, Patrick & Drislane 2015b, Sellbom 2018). This model characterizes psychopathy in terms of three trait constructs that correspond to distinct symptom features of psychopathy but relate more clearly to biobehavioral systems and processes. These are (a) boldness, which encompasses social dominance, venturesomeness, and emotional resilience and connects with the biobehavioral process of threat sensitivity; (b) meanness, which entails low empathy, callousness, and aggressive manipulation of others and relates to biobehavioral systems for affiliation (social connectedness and caring); and (c) disinhibition, which involves boredom proneness, lack of restraint, irritability, and irresponsibility and relates to the biobehavioral process of inhibitory control. (p. 389).
This definition of psychopathy raises several questions about the relationship between boldness, meanness, and disinhibition and psychopathy that are important for valid measurement of psychopathy. First, it is clear that psychopathy is a formative construct. That is, psychopathy is not a common cause of boldness, meanness, and disinhibition, and the definition imposes no restrictions on the correlations among the three traits. Boldness could be positively or negatively correlated with meanness, or they could be independent. In fact, models of normal personality would predict that these three dimensions are relatively independent because boldness is related to Extraversion, meanness is related to low Agreeableness, and disinhibition is related to low Conscientiousness, and these three broader traits are independent. As a result, the definition of psychopathy as a combination of three relatively independent traits implies that psychopaths are characterized by high levels on all three traits. This definition raises questions about how information about the three traits should be combined to produce a valid score that reflects psychopathy. However, in practice, scores on these dimensions are often averaged without a clear rationale for this scoring method.
Patrick's (2022) review also points out that multiple measures aim to measure psychopathy with self-reports: "multiple scale sets exist for operationalizing biobehavioral traits corresponding to boldness, disinhibition, and meanness in the modality of self-report (denoted in Figure 3 by squares labeled with subscript-numbered S's)" (p. 405). It is symptomatic of the lack of measurement theory that Patrick uses the term operationalize instead of measure, because psychometricians rejected the notion of operational measurement over 50 years ago (Cronbach & Meehl, 1955). The problem with operationalism is that every measure is by definition a valid measure of a construct because the construct is essentially defined by the measurement instrument. Accordingly, a psychopathy measure is a valid measure of psychopathy, and if different measures produce different scores, they simply measure different forms of psychopathy. However, few researchers would be willing to accept that their measure is just an arbitrary collection of items without a claim to measure something that exists independently of the measurement instrument. Yet, they also fail to provide evidence that their measure is a valid measure of psychopathy.
Here, I examine the construct validity of one self-report measure of psychopathy using the open data shared by the authors who used this measure, namely the 18-item short form of the Elemental Psychopathy Assessment (EPA; Lynam, Gaughan, Miller, Miller, Mullins-Sweatt, & Widiger, 2011; Collison, Miller, Gaughan, Widiger, & Lynam, 2016). The data were provided by Rose, Crowe, Sharpe, Til, Lynam, and Miller (2022).
Rose et al.’s description of the EPA is brief.
The EPA-SSF (Collison et al., 2016) yields a total psychopathy score (alpha = .70/.77) as well as scores for each of three subscales: Antagonism (alpha = .61/.72), Emotional Stability (alpha = .66/.65), and Disinhibition (alpha = .68/.71).
The description suggests that the measure aims to measure psychopathy as a combination of three traits, although boldness (high Extraversion) is replaced with Emotional Stability (Low Neuroticism).
Based on their empirical findings, Rose et al. (2022) conclude that two of the three traits predict the negative outcomes that are typically associated with psychopathy. “It is the ATM Antagonism and Impulsivity [Disinhibition] domains that are most responsible for psychopathy, narcissism, and Machiavellianism’s more problematic correlates – antisocial behavior, substance use, aggression, and risk taking” (p. 10). In contrast, emotional stability/boldness are actually beneficial. “Conversely, the Emotional Stability and Agency factors are more responsible for the more adaptive aspects including self-reported political and interpersonal skill” (p. 11).
This observation might be used to modify the construct in an iterative process known as construct validation (Cronbach & Meehl, 1955). Accordingly, disconfirming evidence can be attributed to problems with a measure or to problems with a construct. In the present case, the initial assumption appears to be that psychopaths have to be low in Neuroticism, or bold, to commit horrible crimes. Yet, the evidence suggests that there can also be neurotic psychopaths who are violent, and maybe the cause of violence is a combination of high Neuroticism (especially anger) and low Conscientiousness (lack of impulse control). We might therefore limit the construct of psychopathy to low Agreeableness and low Conscientiousness, which would be consistent with some older models of psychopathy (van Kampen, 2009). Even this definition of psychopathy can be critically examined given the independence of these two traits. If the actual personality factors underlying anti-social behaviors are independent, we might want to focus on these independent causes. The term psychopath would be akin to the word girl: both simply describe a combination of two independent attributes, disagreeable and impulsive in one case, young and female in the other. The term psychopath does not add anything to the theoretical understanding of anti-social behaviors because it is defined as nothing more than being mean and impulsive.
Does the EPA-SSF Measure Antagonism, Emotional Stability, and Disinhibition?
The EPA was based on the assumption that psychopathy is related to 18 specific personality traits and that these 18 traits are related to four of the Big Five dimensions. Empirical evidence supported this assumption. Five traits were related to low Neuroticism, namely Unconcerned, Self-Contentment, Self-Assurance, Impulsivity, and Invulnerability, and one was related to high Neuroticism (Anger). Evidently, a measure that combines items reflecting the high and the low pole of a factor is not a good measure of that factor. Another problem is that several of these scales had notable secondary loadings on other Big Five factors. Anger loaded more strongly (and negatively) on Agreeableness than on Neuroticism, and Self-Assurance loaded more highly on Extraversion. Thus, it is a problem to refer to the Emotional Stability scale as a measure of Emotional Stability. If the theoretical model assumes that Emotional Stability is a component of psychopathy, it would be sufficient to use a validated measure of Emotional Stability to measure this component. Presumably, the choice of different items was motivated by the hypothesis that the specific item content of the EPA scales adds to the measurement of psychopathy. In this case, however, it is misleading to ignore this content in the description of the measure and to focus on the shared variance among items.
Another six items loaded negatively on Agreeableness, namely Distrust, Manipulation, Self-Centeredness, Opposition, Arrogance, and Callousness. The results showed that these six items were good indicators of (low) Agreeableness. A minor problem is that this scale is called Antagonism, which is a common term among personality disorder researchers. It is generally understood that Antagonism and Agreeableness are strongly negatively correlated without any evidence of discriminant validity. Thus, it may be confusing to label this factor with a different name when this name merely refers to the low end of Agreeableness (disagreeableness). Aside from this terminological confusion, it is an open question whether the specific item content of the Antagonism scale adds to the definition of psychopathy. For example, the item "I could make a living as a con artist" may not just be a measure of Agreeableness, but may also measure specific aspects of psychopathy.
Another three constructs were clearly related to low conscientiousness, namely Disobliged, Impersistence, and Rashness. A problem occurs when these constructs are measured with a single item because exploratory factor analysis may fail to identify factors that have only three indicators, especially when factors are not independent. Once again, calling this factor Disinhibition can create confusion if it is not stated clearly that Disinhibition is merely a label for low Conscientiousness.
Most surprising is the finding that the last three constructs were unrelated to the three factors that are supposed to be captured with the EPA. Coldness was related to low Extraversion and low Agreeableness. Dominance was related to high Extraversion and low Agreeableness. Finally, Thrill-Seeking had low loadings on all Big Five factors. It is not clear why these scales would be retained in a measure of psychopathy unless it is assumed that their specific content adds to the measurement, and therefore the operational definition, of psychopathy.
In conclusion, the EPA is based on a theory that psychopathy is a multi-dimensional construct that reflects the influence of 18 narrow personality traits. Although these narrow traits are not independent and are related to four of the Big Five factors, the EPA psychopathy scale is not identical to a measure that combines Emotional Stability, low Agreeableness, and low Conscientiousness.
Lynam et al. (2011) also examined how the 18 scales of the EPA are related to other measures of anti-social behaviors. Most notably, all of the low Neuroticism scales showed no relationship with anti-social behavior. The only Neuroticism-related scale that was a predictor was Anger, but Anger not only reflects high Neuroticism but also low Agreeableness. These results raise questions about the inclusion of Emotional Stability in the definition of psychopathy. Yet, the authors conclude, "overall, the EPA appears to be a promising new instrument for assessing the smaller, basic units of personality that have proven to be important to the construct of psychopathy across a variety of epistemological approaches" (p. 122). It is unclear what evidence could have changed the authors' minds that their newly created measure is not a valid measure of psychopathy or that their initial speculation about the components of psychopathy was wrong. The use of the 18-item scale in 2022 shows that the authors have never found evidence to revise their theory of psychopathy or to improve its measurement. It is therefore important to critically examine the construct validity of the EPA from an independent perspective. I focus on the 18-item EPA-SSF because this scale was used by Rose et al. (2022) and I was able to use their open data.
Collison et al. (2016) conducted exploratory factor analyses with Promax rotation to examine the factor structure of the 18-item EPA-SSF. Although Lynam et al. (2011) demonstrated that the items were related to four of the Big Five dimensions, they favored a three-factor solution. One problem with these exploratory analyses is that they provided no evidence about the fit of the model to the data. Another problem is that factor solutions are atheoretical and influenced by item selection and arbitrary rotations. This might explain why the factor solution did not identify the expected factors. I conducted a replication of Collison et al.'s EFAs with Rose et al.'s (2022) data from Study 1 and Study 2. I conducted these analyses in MPLUS, which provides fit indices that can be used to evaluate the fit of a model to the data. I used the default Geomin rotation because it produces additional fit indices, and the key fit index (RMSEA) is the same regardless of rotation. This choice of rotation method has no influence on the validity of the results because neither rotation method is based on a substantive theory about psychopathy.
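As a rough sketch of this step, the exploratory solutions can be compared in Python with the factor_analyzer package; the file and column names below are placeholders, and the fit indices quoted in the text (RMSEA, CFI, BIC) come from MPLUS rather than from this code.

```python
# Hypothetical EFA replication sketch; file and column names are placeholders.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("rose2022_study1_epa_ssf_items.csv")

for k in (3, 4, 5):
    efa = FactorAnalyzer(n_factors=k, rotation="promax", method="ml")
    efa.fit(items)
    loadings = pd.DataFrame(efa.loadings_, index=items.columns,
                            columns=[f"F{i+1}" for i in range(k)])
    print(f"{k}-factor solution, loadings with |loading| > .40:")
    print(loadings.round(2)[loadings.abs() > .40].fillna(""))
```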
The results are consistent across the two datasets. RMSEA and CFI favor 5-factors, while the criterion that favors parsimony the most, BIC, favors 4 factors. A three-factor model does not have bad fit, but it does fail to capture some of the structure in the data.
To examine the actual factor structure, I first replicated Collison et al.'s EFA using a three-factor structure and Promax rotation. Factor loadings greater than .4 (16% explained variance) are highlighted. The results show that the Disinhibition factor is clearly identified: all five items have notable (> .4) loadings on this factor. In contrast, only three items (Coldness, Callousness, and Self-Centeredness) have consistent loadings on the Antagonism factor. The Emotional Stability factor is not identified in the first replication sample because factor 3 shows high loadings for the Extraversion items. The variability of factor loading patterns across datasets may be caused by the arbitrary rotation of factors.
It is unclear why the authors did not use confirmatory factor analysis to test their a priori theory that the 18 items represent different facets of the Big Five factors. Rather than relying on arbitrary statistical criteria, CFA makes it possible to examine whether the pattern of correlations is consistent with a substantive theory. I therefore fitted a CFA model with four factors to the data. The loading pattern was specified based on Lynam et al.'s (2011) pattern of correlations with a Big Five measure; correlations greater than .3 were allowed as free parameters.
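The following sketch shows what this specification looks like in lavaan-style syntax fitted with semopy; the item names are placeholders for the 18 EPA-SSF items, and the loading pattern merely illustrates the rule described above (e.g., Anger on both Neuroticism and Agreeableness, Self-Assurance also on Extraversion, Thrill-Seeking with no loading above .3), not the exact parameter list of the fitted model.

```python
# Hypothetical sketch of the four-factor CFA; item names are placeholders and
# the exact pattern of free loadings is illustrative.
import pandas as pd
import semopy

four_factor = """
N =~ Unconcerned + SelfContentment + SelfAssurance + Impulsivity + Invulnerability + Anger
A =~ Distrust + Manipulation + SelfCenteredness + Opposition + Arrogance + Callousness + Anger + Coldness + Dominance
C =~ Disobliged + Impersistence + Rashness
E =~ SelfAssurance + Coldness + Dominance
"""

data = pd.read_csv("rose2022_study1_epa_ssf_items.csv")  # placeholder file name
model = semopy.Model(four_factor)
model.fit(data)
print(semopy.calc_stats(model).T)  # CFI, RMSEA, and other fit statistics
print(model.inspect())             # parameter estimates and p-values
```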
Fit of this model did not meet standard criteria of acceptable model fit (CFI > .95, RMSEA < .06), but it was not terrible, CFI = .728, RMSEA = .088. Of the 33 free parameters, 29 were statistically significant at p < .05 and many were significant at p < .001; 20 of the coefficients were greater than .3. It is expected that effect sizes are somewhat smaller because the indicators were single items and replication studies are expected to show some regression to the mean due to selection effects. Overall, these results show similarity between Lynam et al.'s (2011) results and the pattern of correlations in the replication study.
The next step was to build a revised model to improve fit. The first step was to add a general evaluative factor to the model. Numerous studies of self-ratings of the Big Five and personality disorder instruments have demonstrated the presence of this factor. Adding a general evaluative factor to the model improved model fit, but it remained below standard criteria of acceptable model fit, CFI = .790, RMSEA = .078.
I then added additional parameters that were suggested by large modification indices. First, I added a loading of Impersistence on Extraversion. This loading was just below the arbitrary cut-off value of .30 in Lynam et al.'s study (r = .29). Another suggested parameter was a loading of Invulnerability on Extraversion (Lynam r = .18). A third parameter was a negative loading of Self-Assurance on Agreeableness. This loading was r = .00 in Lynam et al.'s (2011) study, but this could be due to the failure to control for evaluative bias, which inflates ratings on Self-Assurance and Agreeableness items (Anusic et al., 2009). Another suggested parameter was a positive loading of Opposition on Neuroticism (Lynam r = .18). These modifications improved model fit, but they were not sufficient to achieve an acceptable RMSEA value, CFI = .852, RMSEA = .066. I did not add additional parameters to avoid overfitting the model.
The next step was to fit these two models to Rose et al.'s second dataset. The direct replication of Lynam et al.'s (2011) structure did not fit the data well, CFI = .744, RMSEA = .090, whereas fit of the modified model with the general factor was even better than in Study 1, CFI = .886, RMSEA = .062, and RMSEA was close to the criterion for acceptable fit (.060). These results show that I did not overfit the data. I tried further improvements, but the suggested parameters were not consistent across the two datasets.
In the final step, I deleted free parameters that were not significant in both datasets. Surprisingly, the Disobliged and Impersistence items did not load on Conscientiousness. This suggests some problem with these single-item indicators rather than a conceptual problem, because these constructs have been related to Conscientiousness in many studies. Self-Contentment did not load on Conscientiousness either. Distrust was not related to Extraversion, and Disobliged was not related to Neuroticism. I then fitted this revised model to the combined dataset. This model had acceptable fit based on RMSEA, CFI = .887, RMSEA = .058.
This final model captures the main structure of the correlations among the 18 EPA-SSF items and is consistent with Lynam et al.’s (2011) investigation of the structure by correlating EPA scales with a Big Five measure. It is also consistent with measurement models that show a general evaluative factor in self-ratings. Thus, I am proposing this model as the first validated measurement model of the EPA-SSF. This does not mean that it is the best model, but critics have to present a plausible model that fits the data as well or better. It is not possible to criticize the use of CFA because CFA is the only method to evaluate measurement models. Exploratory factor analysis cannot confirm or disconfirm theoretical models because EFA relies on arbitrary statistical rules that are not rooted in substantive theories. As I showed, EFA led to the proposal of a three-factor model that has poor fit to the data. In contrast, CFA confirmed that the 18 EPA-SSF items are related to four of the Big Five scales. Thus, four – not three – factors are needed to describe the pattern of correlations among the 18 items. I also showed the presence of a general evaluative factor that is common to self-reports of personality. This factor is often ignored in EFA models that rotate factors.
After establishing a plausible measurement model for the EPA-SSF, it is possible to link the factors to the scale scores that are assumed to measure psychopathy, using the model indirect function. The results showed that the general factor explained most of the variance in the scale scores, r = .82, r^2 = 67%. Agreeableness/Antagonism explained only r = -.17, r^2 = 3% of the variance. This is a surprisingly low percentage given the general assumption that antagonism is a core personality predictor of anti-social behaviors. Conscientiousness/Disinhibition was a stronger predictor, but it also explained less than 10% of the variance, r = -.30, r^2 = 9%. The contributions of Neuroticism and Extraversion were negligible. The remaining variance reflects random measurement error and unique item content. In short, these results raise concerns about the ability of the EPA-SSF to measure psychopathy rather than a general factor that is related to many personality disorders or may just reflect method variance in self-ratings.
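The variance shares in this paragraph are simply the squared standardized total effects reported above:

```python
# Squared standardized total effects on the EPA-SSF sum score (values from the text).
for label, r in [("general/evaluative factor", 0.82),
                 ("Agreeableness/Antagonism", -0.17),
                 ("Conscientiousness/Disinhibition", -0.30)]:
    print(f"{label}: r = {r:+.2f}, variance share = {r**2:.0%}")
# general/evaluative factor: 67%; Antagonism: 3%; Disinhibition: 9%
```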
I next examined predictive validity by adding measures of non-violent and violent criminal behaviors. The first model used the EPA-SSF scale to predict the shared variance of non-violent and violent crime, based on the assumption that psychopathy is related to both types of criminal behaviors. The fit of this model was comparable to the fit of the model without the crime variables, CFI = .789 vs. .854, RMSEA = .059 vs. .064. In this model, the EPA-SSF scale was a strong predictor of the crime factor, r = .56, r^2 = 32%. I then fitted a model that used the factors as predictors of crime. This model had slightly better fit than the model that used the EPA-SSF scale as a predictor of crime, CFI = .796 vs. .789, RMSEA = .059 vs. .059. Most importantly, Neuroticism and Extraversion were not significant predictors of crime, but the general factor was. I deleted the parameters for Neuroticism and Extraversion from the model. This further increased model fit, CFI = .802, RMSEA = .058. More importantly, the three factors explained more variance in the crime factor than the EPA-SSF scale, R = .70, R^2 = 49%. There were no major modification indices suggesting that unique variance of the items contributed to the prediction of crime. Nevertheless, I examined a model that used only the general factor as a predictor and added items if they explained additional variance in the crime factor, akin to stepwise regression. This model selected specific items and explained 44% of the variance. The selected items included Manipulation ("I could make a living as a con artist"), b = .22, Self-Centeredness ("I have more important things to worry about than other people's feelings"), b = .25, and Thrill-Seeking ("I like doing things that are risky or dangerous"), b = .39.
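The structural step can be sketched in the same framework: a latent crime factor defined by the non-violent and violent crime measures is regressed on the general factor and the two substantive factors. The file and variable names below are placeholders, and the measurement part is assumed to be the final model from the previous steps (four substantive factors plus the general factor G).

```python
# Hypothetical sketch of the structural model; names are placeholders and the
# measurement syntax is assumed to be stored from the previous step.
import pandas as pd
import semopy

measurement_part = open("epa_ssf_measurement_model.txt").read()  # placeholder

structural = measurement_part + """
Crime =~ NonviolentCrime + ViolentCrime
Crime ~ G + A + C
"""

data = pd.read_csv("rose2022_combined.csv")  # placeholder file name
model = semopy.Model(structural)
model.fit(data)
print(semopy.calc_stats(model).T)
print(model.inspect(std_est=True))  # standardized paths for the Crime regression
```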
Dimensional Models of Psychopathy
The EPA-SSF is a dimensional measure of psychopathy. Accordingly, higher scores on the EPA-SSF scale reflect more severe levels of psychopathy. Dimensional models have the advantage that they do not require validation of some threshold that distinguishes normal personality variation from pathological variation. However, this advantage comes with the disadvantage that there is no clear distinction between low Agreeableness (normal and healthy) and psychopathy (abnormal and unhealthy). Another problem is that the multi-dimensional nature of psychopathy makes it difficult to assess psychopathy. To illustrate, I focus on the key components of psychopathy, namely antagonism (disagreeableness) and disinhibition (low Conscientiousness). One possible way to define psychopathy in relationship to these two components would be to define psychopathy as being high on both dimensions. Another would be to define it with an either/or rule, assuming that each dimension alone may be pathological. A third option is to create an average, but this definition has the problem that the average of two independent dimensions no longer captures all of the information about the components. As a result, the average will be a weaker predictor of actual behavior. This is the same problem that plagues sum-score definitions such as socio-economic status, which averages income and education and thereby reduces the amount of variance that can be explained by income and education independently.
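A small simulation with hypothetical effect sizes makes the information-loss point concrete: when the two components contribute unequally to an outcome, as the results below suggest they do, an equally weighted average predicts less variance than the two components used jointly (with exactly equal contributions, an equally weighted average would lose nothing).

```python
# Hypothetical simulation: averaging two independent predictors loses
# predictive information whenever their true weights are unequal.
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
antagonism = rng.standard_normal(n)
disinhibition = rng.standard_normal(n)
crime = 0.5 * antagonism + 0.1 * disinhibition + rng.standard_normal(n)

composite = (antagonism + disinhibition) / 2

X = np.column_stack([antagonism, disinhibition])
beta, *_ = np.linalg.lstsq(X, crime, rcond=None)
r2_both = 1 - np.var(crime - X @ beta) / np.var(crime)
r2_composite = np.corrcoef(composite, crime)[0, 1] ** 2

print(round(r2_both, 2))       # ~0.21: both components used separately
print(round(r2_composite, 2))  # ~0.14: the equally weighted average
```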
One way to test the definition of psychopathy as being high in antagonism and disinhibition is to examine whether the two factors interact in the prediction of criminal behaviors. Accordingly, crimes should be most likely to be committed by individuals who are both antagonistic and disinhibited, whereas each dimension alone should be only a weak predictor of crime. I fitted a model with an interaction term as a predictor of the crime factor. The interaction effect was not significant, b = .04, se = .14, p = .751. Thus, there is presently no justification to define psychopathy as a combination of antagonism and disinhibition. Psychopathy appears to be better defined as being either antagonistic or disinhibited to such an extent that individuals engage in criminal or other harmful behaviors. Yet, this definition does not really add anything to our understanding of personality and criminal behavior. It is like the term infection, which may refer to a viral or a bacterial infection.
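For readers who want to check this kind of moderation hypothesis with observed scale scores rather than latent variables, a minimal sketch looks like this (placeholder variable and file names; the analysis reported here was done at the latent level):

```python
# Hypothetical observed-score version of the interaction test.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rose2022_combined.csv")  # placeholder file name

# Mean-center the predictors so the main effects remain interpretable.
df["antagonism_c"] = df["antagonism"] - df["antagonism"].mean()
df["disinhibition_c"] = df["disinhibition"] - df["disinhibition"].mean()

fit = smf.ols("crime ~ antagonism_c * disinhibition_c", data=df).fit()
print(fit.summary())  # the antagonism_c:disinhibition_c term tests the 'high on both' definition
```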
The Big Five Facets and Criminal Behavior
Investigation of the construct validity of the EPA-SSF showed that the 18 items reflect four of the Big Five dimensions and that two of the Big Five factors predicted criminal behavior. However, the 18 items are poor indicators of the Big Five factors. Fortunately, Rose et al. (2022) also included a 120-item Big Five measure that also measures 30 Big Five facets (4 items per scale). It is therefore possible to examine the personality predictors of criminal behaviors with a better instrument to measure personality. To do so, I first fitted a measurement model to the 30 facet scales. This model was informed by previous CFA analyses of the 30 facets. Most importantly, the model included a general evaluative factor that was independent of the Big Five factors. I then added the items about non-violent and violent crime and created a factor for their shared variance. Finally, I added the three EPA-SSF items that appeared to predict variance in the crime factor. I also related these items to the facets that predicted variance in them. The final model had acceptable fit according to the RMSEA criterion (< .06), RMSEA = .043, but not the CFI criterion (> .95), CFI = .874; however, I was not able to find meaningful ways to improve model fit.
The personality predictors accounted for 61% of the variance in the crime factor. This is more variance than the EPA-SSF factors explained. The strongest predictor was the general evaluative or halo factor, b = -.49, r^2 = 24%. Surprisingly, the second strongest predictor was the Intellect facet of Openness, and the relationship was positive, b = .41, r^2 = 17%. More expected was a significant contribution of the Compliance facet of Agreeableness, b = -.31, r^2 = 9%. Finally, the unique variance in the three EPA-SSF items (controlling for evaluative bias and variance explained by the 30 facets) added another 10% of explained variance, r = .312.
These results further confirm that Emotional Stability is not a predictor of crime, suggesting that it should not be included in the definition of psychopathy. These results also raise questions about the importance of disinhibition. Surprisingly, Conscientiousness was not a notable predictor of crime. It is also notable that Agreeableness is only indirectly related to crime: only the Compliance facet was a significant predictor. This means that disagreeableness is only problematic in combination with other unidentified factors that make disagreeable people non-compliant. As a result, it is problematic to treat the broader Agreeableness/Antagonism factor as a disorder. Similarly, all murderers are human, but we would not consider being human a pathology.
Discussion
Concerns about the validity of psychological measures led to the creation of a taskforce to establish scientific criteria of construct validity (Cronbach & Meehl, 1955). The key recommendation was to evaluate construct validity within a nomological net. A nomological net aims to explain a set of empirical findings related to a measure in terms of a theory that predicts these relationships. Psychometricians later developed structural equation modeling (SEM), which makes it possible to test nomological nets. Here, I used structural equation modeling to examine the construct validity of the Elemental Psychopathy Assessment – Super Short Form scale.
My examination of the psychometric properties of this scale raises serious questions about its construct validity. The first problem is that the scale was developed without a clear definition of psychopathy. The measure is based on the hypothesis that psychopathy is related to 18 distinct, maladaptive personality traits (Lynam et al., 2011). This initial assumption could have led to a program of validation research that suggested revisions to this theory. Maybe some traits were missing or unnecessary. However, the measure and its short form have not been revised. This could mean that Lynam et al. (2011) discovered the nature of psychopathy in a stroke of genius, or that Lynam et al. failed to test the construct validity of the EPA. My analyses suggest the latter. Most importantly, I showed that there is no evidence to include Emotional Stability in the definition and measurement of psychopathy.
I am not the first to point out this problem with the EPA. Collison et al. (2016) discuss the inclusion of Emotional Stability in a definition of psychopathy at length.
it may seem counter-intuitive that Emotional Stability would be included as a factor of the super-short form (and other EPA forms). We do not believe its inclusion is inconsistent with our previous positions as the EPA was developed, in part, from clinical expert ratings of personality traits of the prototypical psychopath. Given that many of these ratings came from proponents of the idea that FD is a central component of psychopathy, it is natural that traits resembling FD, or emotional stability, would be present in the obtained profiles. While we present Emotional Stability as a factor of the EPA measure, however, we do not claim Emotional Stability to be a central feature of psychopathy. Its relatively weak relations to other measures of psychopathy and external criteria traditionally related with psychopathy support this argument (Gartner, Douglas, & Hart, 2016; Vize, Lynam, Lamkin, Miller, & Pardini, 2016).” (p. 2016).
Yet, Rose et al. (2022) treat the EPA-SSF as if it were a valid measure of psychopathy and make numerous theoretical claims that rely on this assumption. It is of course possible to define psychopathy in terms of low Neuroticism, but it should be made clear that this definition is stipulative and cannot be empirically tested. The construct that is being measured is an artifact created by the researchers. While Neuroticism is a construct that describes something in the real world (some people are more anxious than others), psychopathy so defined is merely a list of traits. Some researchers may want to include a particular trait on the list and others may not. The only problem arises when the term psychopathy is used for different lists. The EPA scale is best understood as a measure of 18 traits. We may call this Psychopathy-18 to distinguish it from other constructs and measures of psychopathy.
The list definition of psychological constructs creates serious problems for the measurement of these constructs because list definitions imply that a construct can be defined in terms of its necessary and sufficient components. Accordingly, a psychopath could be somebody who is high in Emotional Stability, low in Agreeableness, and low in Conscientiousness, or somebody who possesses all of the specific traits included in the definition of Psychopathy-18. However, traits are continuous constructs, and it is not clear how individual trait profiles should be related to quantitative variation in psychopathy. Lynam et al. sidestepped this problem by simply averaging across the profile scores and treating this sum score as a measure of psychopathy. However, averaging results in a loss of information, and the sum score depends on the correlations among the traits. This is not a problem when the intended construct is the factor that produces the correlations, but it is a problem when the construct is the profile of trait scores. As I showed, the sum score of the 18 EPA-SSF items mainly reflects the variance that is shared among all 18 items, which reflects a general evaluative factor. This general factor is not mentioned by Lynam et al. and is clearly not the construct that the EPA-SSF was intended to measure. Thus, even if psychopathy were defined in terms of 18 specific traits, the EPA-SSF sum score does not provide information about psychopathy because the information that the items were supposed to provide is destroyed by averaging them.
Conclusion
In conclusion, I am not an expert on personality disorders or psychopathy. I don't know what psychopathy is. However, I am an expert on psychological measurement, and I am able to evaluate construct validity based on the evidence that authors of psychological measures provide. My examination of the construct validity of the EPA-SSF using the authors' own data makes it clear that the EPA-SSF lacks construct validity. Even if we follow the authors' proposal that psychopathy can be defined in terms of 18 specific traits, the EPA-SSF sum score does not capture the theoretical construct. If you take this test and get a high score, it does not mean you are a psychopath. More importantly, research findings based on this measure do not help us to explore the nomological network of psychopathy.