“This study found that data sharing among psychologists is driven primarily by their perceptions of community benefits, academic reciprocity and the norms of data sharing. This study also found that academic reciprocity is significantly influenced by psychologists’ perceptions of community benefits, academic reputation and the norms of data sharing. Both academic reputation and academic reciprocity are affected by psychologists’ prior experiences with data reuse. Additionally, psychologists’ perceptions of community benefits and the norms of data sharing are significantly affected by the perception of their academic reputation.”
Open Educational Resources in Psychology: A Starter Pack
A collection of psychology OER, gathered by Rajiv Jhangiani.
Data sharing upon request and statistical consistency errors in psychology: A replication of Wicherts, Bakker and Molenaar (2011) | PLOS ONE
Abstract: Sharing research data allows the scientific community to verify and build upon published work. However, data sharing is not yet common practice. The reasons for not sharing data are myriad: some are practical, others are more fear-related. One particular fear is that a reanalysis may expose errors. For this explanation, it would be interesting to know whether authors who do not share data genuinely made more errors than authors who do share data. Wicherts, Bakker and Molenaar (2011) examined errors that can be discovered from the published manuscript alone, because it is impossible to reanalyze unavailable data. They found a higher prevalence of such errors in papers for which the data were not shared. However, Nuijten et al. (2017) did not find support for this finding in three large studies. To shed more light on this relation, we conducted a replication of the study by Wicherts et al. (2011). Our study consisted of two parts. In the first part, we reproduced the analyses from Wicherts et al. (2011) to verify the results, and we carried out several alternative analytical approaches to evaluate the robustness of the results against other analytical decisions. In the second part, we used a unique and larger data set on data sharing upon request for reanalysis, originating from Vanpaemel et al. (2015), to replicate the findings of Wicherts et al. (2011). We applied statcheck to detect consistency errors in all included papers and manually corrected false positives. Finally, we again assessed the robustness of the replication results against other analytical decisions. Taken together, we found no robust empirical evidence for the claim that not sharing research data for reanalysis is associated with consistency errors.
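The abstract mentions statcheck, an R package that scans papers for reported test statistics and checks whether the reported p-values match the test statistic and degrees of freedom. As a rough illustration of that kind of consistency check (not statcheck itself; the regex, two-tailed assumption, and rounding tolerance below are simplifications I am assuming for the sketch), a minimal Python version might look like this:

```python
import re
from scipy import stats

# Minimal sketch of a statcheck-style consistency check: extract APA-style
# t-test reports from text and recompute the p-value from the reported
# statistic and degrees of freedom. The regex and tolerance are illustrative
# simplifications, not statcheck's actual extraction or decision rules.
T_TEST = re.compile(r"t\((\d+)\)\s*=\s*(-?\d+\.?\d*),\s*p\s*([<=>])\s*(\.\d+)")

def check_t_tests(text, tol=0.0005):
    """Flag reported p-values that disagree with the recomputed p-value."""
    findings = []
    for df, t_val, comparator, p_rep in T_TEST.findall(text):
        p_computed = 2 * stats.t.sf(abs(float(t_val)), int(df))  # two-tailed
        p_reported = float(p_rep)
        if comparator == "=":
            consistent = abs(p_computed - p_reported) < tol + 0.005  # rounding slack
        elif comparator == "<":
            consistent = p_computed < p_reported
        else:  # ">"
            consistent = p_computed > p_reported
        findings.append({
            "df": int(df),
            "t": float(t_val),
            "reported_p": f"{comparator}{p_rep}",
            "computed_p": round(p_computed, 5),
            "consistent": consistent,
        })
    return findings

# Example: the reported p = .05 does not match t(28) = 2.20 (p is roughly .036),
# so the entry is flagged as inconsistent.
print(check_t_tests("We found a group difference, t(28) = 2.20, p = .05."))
```

The real package handles many test types (t, F, chi-square, r, z) and edge cases such as one-tailed tests and rounding of the test statistic, which is why the replication authors also corrected false positives by hand.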
Open science in health psychology and behavioral medicine: A statement from the Behavioral Medicine Research Council.
Abstract: Open Science practices include some combination of registering and publishing study protocols (including hypotheses, primary and secondary outcome variables, and analysis plans) and making available preprints of manuscripts, study materials, de-identified data sets, and analytic codes. This statement from the Behavioral Medicine Research Council (BMRC) provides an overview of these methods, including preregistration, registered reports, preprints, and open research. We focus on rationales for engaging in Open Science and how to address shortcomings and possible objections. Additional resources for researchers are provided. Research on Open Science largely supports positive consequences for the reproducibility and reliability of empirical science. There is no solution that will encompass all Open Science needs in health psychology and behavioral medicine’s diverse research products and outlets, but the BMRC supports increased use of Open Science practices where possible.
Registered report: Survey on attitudes and experiences regarding preregistration in psychological research | PLOS ONE
Abstract: Background
Preregistration, the open science practice of specifying and registering details of a planned study prior to knowing the data, increases the transparency and reproducibility of research. Large-scale replication attempts for psychological results yielded shockingly low success rates and contributed to an increasing demand for open science practices among psychologists. However, preregistering one’s studies is still not the norm in the field. Here, we conducted a study to explore possible reasons for this discrepancy.
Methods
In a mixed-methods approach, we conducted an online survey assessing attitudes, motivations, and perceived obstacles with respect to preregistration. Respondents (N = 289) were psychological researchers who were recruited through their publications on Web of Science, PubMed, PSYNDEX, and PsycInfo, and through preregistrations on OSF Registries. Based on the theory of planned behavior, we predicted that positive attitudes (moderated by the perceived importance of preregistration) as well as a favorable subjective norm and higher perceived behavioral control positively influence researchers’ intention to preregister (directional hypothesis 1). Furthermore, we expected an influence of research experience on attitudes and perceived motivations and obstacles regarding preregistration (non-directional hypothesis 2). We analyzed these hypotheses with multiple regression models and included preregistration experience as a control variable.
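For readers less familiar with this analytic setup, a minimal sketch of a moderated multiple regression with a control variable is shown below. The variable names and simulated data are hypothetical stand-ins for the survey constructs described above, not the authors' data or exact model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data for the constructs named in the abstract
# (attitude, perceived importance, subjective norm, behavioral control,
# preregistration experience, intention to preregister).
rng = np.random.default_rng(0)
n = 289
df = pd.DataFrame({
    "attitude": rng.normal(size=n),
    "importance": rng.normal(size=n),            # perceived importance (moderator)
    "subjective_norm": rng.normal(size=n),
    "behavioral_control": rng.normal(size=n),
    "prereg_experience": rng.integers(0, 2, n),  # control variable (0/1)
})
df["intention"] = (
    0.4 * df["attitude"]
    + 0.2 * df["attitude"] * df["importance"]    # moderation term
    + 0.3 * df["subjective_norm"]
    + 0.3 * df["behavioral_control"]
    + 0.2 * df["prereg_experience"]
    + rng.normal(size=n)
)

# Multiple regression with an attitude x importance interaction (moderation)
# and preregistration experience entered as a control variable.
model = smf.ols(
    "intention ~ attitude * importance + subjective_norm + behavioral_control"
    " + prereg_experience",
    data=df,
).fit()
print(model.summary())
```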
Results
Researchers’ attitudes, subjective norms, perceived behavioral control, and the perceived importance of preregistration significantly predicted researchers’ intention to use preregistration in the future (see hypothesis 1). Research experience influenced both researchers’ attitudes and their perception of motivations to preregister, but not the perception of obstacles (see hypothesis 2). Descriptive reports on researchers’ attitudes, motivations and obstacles regarding preregistration are provided.
Discussion
Many researchers had already preregistered and had a rather positive attitude toward preregistration. Nevertheless, several obstacles were identified that may be addressed to improve and foster preregistration.
New journals seek to fill neurodiversity gap | Spectrum | Autism Research News
Overview of two new journals on neurodiversity.
Inaccuracy in the Scientific Record and Open Postpublication Critique – Chris R. Brewin, 2023
Abstract: There is growing evidence that the published psychological literature is marred by multiple errors and inaccuracies and often fails to reflect the changing nature of the knowledge base. At least four types of error are common—citation error, methodological error, statistical error, and interpretation error. In the face of the apparent inevitability of these inaccuracies, core scientific values such as openness and transparency require that correction mechanisms are readily available. In this article, I reviewed standard mechanisms in psychology journals and found them to have limitations. The effects of more widely enabling open postpublication critique in the same journal in addition to conventional peer review are considered. This mechanism is well established in medicine and the life sciences but rare in psychology and may assist psychological science to correct itself.
What’s in a Badge? A Computational Reproducibility Investigation of the Open Data Badge Policy in One Issue of Psychological Science – Sophia Crüwell, Deborah Apthorp, Bradley J. Baker, Lincoln Colling, Malte Elson, Sandra J. Geiger, Sebastian Lobentanzer, Jean Monéger, Alex Patterson, D. Samuel Schwarzkopf, Mirela Zaneva, Nicholas J. L. Brown, 2023
Abstract: In April 2019, Psychological Science published its first issue in which all Research Articles received the Open Data badge. We used that issue to investigate the effectiveness of this badge, focusing on the adherence to its aim at Psychological Science: sharing both data and code to ensure reproducibility of results. Twelve researchers of varying experience levels attempted to reproduce the results of the empirical articles in the target issue (at least three researchers per article). We found that all 14 articles provided at least some data and six provided analysis code, but only one article was rated to be exactly reproducible, and three were rated as essentially reproducible with minor deviations. We suggest that researchers should be encouraged to adhere to the higher standard in force at Psychological Science. Moreover, a check of reproducibility during peer review may be preferable to the disclosure method of awarding badges.
Opportunities, challenges and tensions: Open science through a lens of qualitative social psychology – Pownall – British Journal of Social Psychology – Wiley Online Library
Abstract: In recent years, there has been a focus in social psychology on efforts to improve the robustness, rigour, transparency and openness of psychological research. This has led to a plethora of new tools, practices and initiatives that each aim to combat questionable research practices and improve the credibility of social psychological scholarship. However, the majority of these efforts derive from quantitative, deductive, hypothesis-testing methodologies, and there has been a notable lack of in-depth exploration about what the tools, practices and values may mean for research that uses qualitative methodologies. Here, we introduce a Special Section of BJSP: Open Science, Qualitative Methods and Social Psychology: Possibilities and Tensions. The authors critically discuss a range of issues, including authorship, data sharing and broader research practices. Taken together, these papers urge the discipline to carefully consider the ontological, epistemological and methodological underpinnings of efforts to improve psychological science, and advocate for a critical appreciation of how mainstream open science discourse may (or may not) be compatible with the goals of qualitative research.
Experimentology: An Open Science Approach to Experimental Psychology Methods
“How do we create generalizable theories of human behavior? Experiments provide us with a tool for measuring causal effects, which form the basis for building theories. If we design our experiments appropriately, we can even begin to estimate generalizable relationships between different psychological constructs. But how do you do an experiment?
This book provides an introduction to the workflow of the experimental researcher in the psychological sciences. The organization is sequential, from the planning stages of the research process through design, data collection, analysis, and reporting. We introduce these concepts via narrative examples from a range of sub-disciplines, including cognitive, developmental, and social psychology. Throughout, we also illustrate the pitfalls that led to the “replication crisis” in psychology. To help researchers avoid these pitfalls, we advocate for an open-science based approach, providing readers with guidance for preregistration, project management, data sharing, and reproducible writing….”
Ten Strategies to Foster Open Science in Psychology and Beyond | Collabra: Psychology | University of California Press
Abstract: The scientific community has long recognized the benefits of open science. Today, governments and research agencies worldwide are increasingly promoting and mandating open practices for scientific research. However, for open science to become the by-default model for scientific research, researchers must perceive open practices as accessible and achievable. A significant obstacle is the lack of resources providing a clear direction on how researchers can integrate open science practices in their day-to-day workflows. This article outlines and discusses ten concrete strategies that can help researchers use and disseminate open science. The first five strategies address basic ways of getting started in open science that researchers can put into practice today. The last five strategies are for researchers who are more advanced in open practices to advocate for open science. Our paper will help researchers navigate the transition to open science practices and support others in shifting toward openness, thus contributing to building a better science.
NIMH » NIMH Creates Publicly Accessible Resource With Data From Healthy Volunteers
“Studying healthy people can help researchers understand how the brain works in states of health and illness. Although many mental health studies include healthy participants as a comparison group, these studies typically focus on selected measures relevant to a certain functional domain or specific mental illness. The Healthy Research Volunteer Study at the National Institute of Mental Health aims to build a comprehensive, publicly accessible resource with a range of brain and behavioral data from healthy volunteers.
This resource aims to shed light on basic questions about brain function and translational questions about the relationship between brain and behavior. Although the study focuses on healthy volunteers, the data also have relevance to clinical questions about neurobiological, cognitive, and emotional processes associated with certain mental illnesses.
The NIMH Healthy Research Volunteer Study is unique in the breadth and depth of its measures. All data collected as part of the study are anonymized and shared with the research community via the OpenNeuro repository….”
About Meta-Psychology
“Meta-Psychology publishes theoretical and empirical contributions that advance psychology as a science through critical discourse related to individual articles, research lines, research areas, or psychological science as a field. Important contributions include systematic reviews, meta-analyses, replicability reports, and replication studies. We encourage pre-registered studies and registered reports (i.e., peer review on the basis of theory, methods, and planned data analysis, before data have been collected). Manuscripts introducing novel methods are welcome, as are tutorials on established methods that are still poorly understood by psychology researchers. We further welcome papers introducing statistical packages or other software useful for psychology researchers….”