An interview with Ben Brown, Guest Editor of the PLOS ONE-COS Cognitive Psychology Collection

PLOS ONE, in collaboration with the Center for Open Science, recently launched a Cognitive Psychology Collection. It includes submissions to a Call for Papers in cognitive developmental psychology across the lifespan, with an emphasis on open science—transparent reporting practices such as pre-registration or iterative registration; data, code, and material sharing; and preprint posting. 

Ben Brown was one of three Guest Editors for this project, along with Nivedita Mani and Ramesh Kumar Mishra. Ben is Associate Professor of Psychology at Georgia Gwinnett College in Georgia, USA. Ben’s research interests are in developmental psychology: he has worked on autobiographical memory, for instance in populations with autism spectrum disorder, and on children’s susceptibility to suggestion.

Benjamin Brown, Guest Editor for the Cognitive Psychology Collection

Ben also has a long-standing interest in open science and the reproducibility and replicability of psychology research: he is a founding member of PsyArXiv, the preprint repository for the psychological sciences hosted by COS, and is a Senior Editor at Collabra: Psychology, the open-access journal of the Society for the Improvement of Psychological Science.

I asked Ben about his editorial experience for this collection and his advocacy for open science more broadly.

Can you tell us about your interest in open science, what drew you to it and how that affects your own research?

My interest in scientific rigor and transparency began during my graduate training. During this time, I struggled to replicate well-known and highly regarded findings and found myself frustrated with the lack of transparent reporting in psychological research. As a result, I was eager for opportunities to contribute to improving psychological science.

When I learned of community efforts to address these same challenges I had faced in my own work, I happily and without reservation got involved. In doing so, I found a strong sense of camaraderie with other psychologists working on the issues that I felt so isolated grappling with in graduate school.

Preregistrations, including any modifications, help reviewers contextualize results and consider matters such as researchers’ degrees of freedom. I honestly would find it difficult to go back to a more traditional editorial experience.

Ben Brown, PLOS ONE Guest Editor

Throughout my involvement in the open science movement, I have been pleasantly surprised to find that helping to enable scholars to conduct science in more open and transparent ways can be just as, if not more, rewarding than conducting original research itself.

A rationale for this Cognitive Psychology Call for Papers, with its emphasis on transparent reporting and pre-registration, was to help address difficulties in recruitment and planning that are particularly relevant to that field of research. Can you tell us more about it? How do these concerns affect your editorial work more generally?

Transparent communication about the process of scientific research – recruitment, protocol, data analysis – is central to the credibility of science as a field. Unfortunately, many factors make this challenging across subdisciplines within psychology.

Scholars working in cognitive development are often tasked with understanding how processes and abilities change over time, and doing so often requires responsiveness to the practical demands of samples that inherently change over the course of their involvement in a given research project. Further, measuring cognitive processes is quite challenging, and trial and error is often necessary to arrive at sound, reliable research protocols even in the best of scenarios. This is magnified when such protocols need to be adjusted to the needs of a sample whose abilities are also growing and changing. Thus, it can be very difficult to anticipate, at the outset of a large longitudinal study, for example, every decision that will need to be made over the course of the project, and to rigidly adhere to those decisions.

Transparently describing and reporting when decisions regarding research methods and analysis were made—at study outset, during data collection, after data analysis had begun—enables others to better contextualize and understand study findings.

Ben Brown, PLOS ONE Guest Editor

Nevertheless, transparent and complete reporting remains important. Given the challenges I described, some scholars working in this area have been hesitant to adopt preregistration out of concern that the practice may reduce their ability to be creative, flexible, and responsive to their needs or the needs of their samples. What I am so excited about with regard to preregistration, however, is that I see it as actually enabling those things, while also improving the interpretability of research findings and the cumulative nature of science. Transparently describing and reporting when decisions regarding research methods and analysis were made—at study outset, during data collection, after data analysis had begun—enables others to better contextualize and understand study findings. Further, preregistration and subsequent documentation of deviations from an original plan help other scholars working in the area better plan their own research, by enabling them to anticipate and proactively address challenges.

I have had some previous experience editing more transparent submissions at outlets like Collabra: Psychology and find it quite refreshing. Open data and code allow for easy verification of results. Preregistrations, including any modifications, help reviewers contextualize results and consider matters such as researchers’ degrees of freedom. I honestly would find it difficult to go back to a more traditional editorial experience.

How do you think some of the papers in this Collection illustrate good open science practices that can improve rigor and reliability in psychological research? For instance, the Collection includes a Registered Report Protocol on improving the diagnostic accuracy of Alzheimer’s disease, a hotly debated research topic, and another Registered Report Protocol on a user-friendly mobile application to assess inhibitory control (see an interview with the authors of this protocol on the COS blog). What role do you think a more transparent planning and reporting process can play?

I was delighted to see the open, transparent practices exemplified by the articles in this collection. I was particularly encouraged to see the Registered Report examining Alzheimer’s disease within the collection. As I mentioned previously, I believe that preregistration is among the best things we can be doing as a field and research area to improve rigor and transparency.

I was also happy to suggest additional ways in which contributing authors might share their science openly. Namely, I suggested that we encourage all submitting authors to share their manuscripts as preprints on PsyArXiv. Sharing manuscripts in this way further ensures that findings are transparently disseminated, even if the work is ultimately less appealing to publishing outlets, such as when studies report null findings or when work is considered less novel. These studies are important components of the scientific record, and sharing them openly can contribute to a more complete and cumulative science.

Introducing the PLOS ONE-COS Cognitive Psychology Collection


PLOS ONE and the Center for Open Science are pleased to announce the publication of a Cognitive Psychology Collection. This Collection results from a Call for Papers launched last year that invited submissions in cognitive developmental psychology across the lifespan, with an emphasis on open science practices. 

The Call for Papers’ Guest Editors Benjamin Brown (Georgia Gwinnett College), Nivedita Mani (Georg-August-Universität Göttingen), and Ramesh Kumar Mishra (University of Hyderabad) curated this Collection.

As Guest Editor Benjamin Brown pointed out in the Call for Papers, “[d]evelopmental psychology has been slower to embrace the movement towards more research transparency that has been seen in other fields of psychology in recent years. To improve the replicability of our science, it is vital as a field that we adopt more transparent research practices such as preregistration and the sharing of materials, data, code, and preprints.” 

the small steps toward transparency and best practice that we take in successive projects not only make us more confident of the results we report but also make us calmer in planning projects.

Guest Editor Nivedita Mani 

Guest Editor Nivedita Mani and Mariella Paul, a postdoctoral researcher in her department, echoed this sentiment when they recounted in an interview their journey toward more transparent and reproducible science, both in their own research and for the whole field, asserting that “the small steps toward transparency and best practice that we take in successive projects not only make us more confident of the results we report but also make us calmer in planning projects.”

The Call for Papers emphasized the importance of transparency in reporting and methodological rigor in cognitive psychology, especially given hard-to-reach populations, high variability in responses, and reduced attention during experiments. For that reason, the call particularly welcomed submissions of pre-registered studies and manuscripts with shared code or data. We encouraged authors to include in their submission an Open Science Framework project page (more information on how to do that here) and to submit a preprint, for instance on PsyArXiv. This Call for Papers was also PLOS ONE’s first to invite Registered Report Protocols, then a new submission format at the journal.

The open science practices taking place across so many disciplines highlight the broad importance of shared data or peer reviewed protocols in supporting such important research.

David Mellor, Director of Policy at the Center for Open Science

At this time, the Collection includes 21 studies. They span a wide range of research topics and study types, from a proof-of-concept protocol for a mobile application for inhibitory control to a metacognitive social learning strategies study, an exploration of predictors of attentional functioning profiles in children, and experiments on context effects on decision making and processing under risk in adults and adolescents.

“The open science practices taking place across so many disciplines highlight the broad importance of shared data or peer reviewed protocols in supporting such important research,” says David Mellor, Director of Policy at the Center for Open Science. “These are important examples of how open science helps everyone in the research community.”

This Collection highlights articles that best illustrate open science practices, such as a Registered Report Protocol on improving the diagnostic accuracy of Alzheimer’s disease or a study on infants’ language acquisition that includes its data and R analysis script on its OSF page.

Improving transparency in reporting also means publishing null or low-effect-size results. This Collection includes, for instance, a study—along with its experimental materials and code—where group competition did not influence children’s collaborative reasoning, and another article—with all its stimuli, data, and analysis files on its OSF page—suggesting that preschoolers do not have specifically biological expectations about animate agents.

Papers will continue to be added to the Collection as they reach publication, so we invite you to revisit the Collection again for additional insights into reproducible and transparent research in cognitive developmental psychology.

Registered Reports: One Year at PLOS ONE


A little over a year ago, PLOS ONE launched two new submission formats: Registered Report Protocols, peer-reviewed articles that describe planned research not yet initiated, and their follow-up Registered Reports, which report the results of the completed research and which receive an in-principle acceptance when their protocol is accepted for publication. It was part of a broader push for preregistration at PLOS.

When we added these options to the list of regular submission types we consider, the format wasn’t new: about 200 journals were already considering Registered Reports for publication, and the number has kept increasing since. The format first took hold in the behavioral sciences and then made its way, sometimes with a few tweaks, to other disciplines. Preregistration in general has even been the norm in clinical trial research for years, albeit not necessarily with peer review. And Registered Reports weren’t entirely new at PLOS ONE either: our partnerships with the Children’s Tumor Foundation and FluLab predate this launch.

But this launch had two distinctive features. First, we would publish the protocol (also called a “stage-1 registered report”) of every registered report we considered. We would do so regardless of the eventual results of the planned research, of course, but also regardless of whether the final report (the “stage-2 registered report”) would be submitted or even completed. The Registered Report Protocol would be a publication in its own right, held to the standards of any PLOS publication: with our expectations of data availability and rigorous ethics oversight, and with the possibility of making the full peer-review history available. It was, as far as we were aware, a distinctively transparent publishing format.

Other journals already published stage-1 Registered Reports, to be sure, but not at that scale or with the disciplinary breadth that PLOS ONE provides. This was the launch’s second distinctive feature: we were relying on an academic editorial board of thousands of members to embrace a format with a different review process and criteria, across as many study types and topics as the journal would normally consider.

For the 1st time since @RegReports were created in 2013, there is now at least one journal option for every research field across the full spectrum of physical, life and social sciences.

Chris Chambers, on PLOS ONE launching Registered Reports

The Registered Report format has been adapted and implemented in many ways across hundreds of journals (for instance at PLOS Biology). We made some choices with our own format: although deviations from the published protocol could invalidate the in-principle acceptance of the final report, we would consider such deviations, provided they are acknowledged and justified. We would also welcome exploratory, unregistered, or unplanned analyses in the final report, provided they are clearly identified as such. A Registered Report Protocol is an opportunity to receive early feedback on a study; it is an opportunity to claim ownership of a research project without having to wait for results to come in; and it is a tool against the publication bias that drives us all away from publishing null results. Above all, we envisioned Registered Report Protocols as a mechanism for transparency in publishing and reporting rather than an unbreakable and inflexible vow.

Our choice to distinguish clearly between the protocol and its final report, on the other hand, makes our format less suited to serial submissions and iterative registrations (which some other journals publishing Registered Reports explicitly welcome). Authors wishing to pursue these with PLOS ONE can submit subsequent iterations of their registration (i.e., after the first follow-up to a published Registered Report Protocol) as regular research articles. With that caveat, we wanted a format that is relatively flexible and suited to as many study types and fields as we normally consider.

So what can we say a year later? We have received over 300 Registered Report Protocol submissions, about 60 of which are already published or accepted for publication (the first Registered Report Protocol was published in June of last year), by first authors from more than 20 countries. These submissions have acceptance and rejection rates comparable to our regular submissions. They cover many disciplinary areas: about 70% of the submissions are in medicine and health sciences, 15% in the behavioral and social sciences, and 8% in the life sciences. A call for papers in cognitive psychology, launched last fall in collaboration with the Center for Open Science, invited Registered Report Protocol submissions. Finally, we have already received a few follow-up Registered Report submissions. If and when we publish these stage-2 Registered Reports, they will be interlinked with their corresponding protocols so readers can easily navigate between them. 

The Registered Report Protocol submissions we received this past year include now-published protocols for a systematic review on the effectiveness of public health interventions against COVID-19, a psychology survey study on trust in international relations, an animal study on neural plasticity, and a study of biomedical sentence similarity measures, among others. They have been handled by a number of our Academic Editors and reviewers, many of whom were just discovering that Registered Reports were an option in their field. The journal’s editorial board members and reviewers have been instrumental in this successful rollout. As Andrew Miles, author of a published Registered Report Protocol, attested, “my research team and I benefited from careful reading by several excellent reviewers, as well as from an editor who pointed us to a data collection tool that we hadn’t previously been aware of.”

Reproducibility of medical research findings has been found to be low, and Registered Reports give me the unique opportunity to describe in detail the statistical-methodological approach prior to having seen the data, and to get credit for it with respect to visibility in authorship. When we submitted our registered report to PLOS ONE we received very detailed reviewer comments, and we could improve our study design and analysis, as well as reporting. PLOS ONE publishes the [Registered Report Protocol] prior to the final study results, which has the advantage that the study can be brought to other people’s attention at a much earlier stage.

Ulrike Held, PLOS ONE Author
Is reporting quality in medical publications associated with biostatisticians as co-authors? A registered report protocol

Registered Reports are now just one item on a growing menu of publication formats. Recently, PLOS ONE launched new protocol types: Study Protocols and Lab Protocols. The Study Protocol format closely resembles that of Registered Report Protocols, but doesn’t come with an in-principle acceptance of the final report. Under the leadership of our new Editor-in-Chief Emily Chenette, PLOS ONE will continue to work with our communities to improve scientific communication, using the principles of openness, transparency, rigor, and reproducibility as guides.

Open science and cognitive psychology: An interview with Guest Editor Nivedita Mani and Mariella Paul

Nivedita Mani (“Nivi”) is Professor at the University of Göttingen, Germany, where she leads the “Psychology of Language” research group at the Georg-Elias-Müller Institute for Psychology. Her work examines the factors underlying word learning and recognition in young children, viewing word learning as the result of a dynamic mutual interaction between the environment and the learner. She is also one of the Guest Editors of an ongoing PLOS ONE Call for Papers in developmental cognitive psychology in collaboration with the Center for Open Science. This Call has a particular emphasis on reproducibility, transparency in reporting, and pre-registration.

Prof. Dr. Nivedita Mani

Mariella is a postdoctoral researcher in Nivi’s department. She is interested in how children’s interests shape their word learning, which she investigates using several methods, including EEG, online studies, and meta-analytic approaches. Mariella was one of the co-founders of the Open Science initiative at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, Germany, where she did her PhD, and was awarded an eLife Community Ambassadorship to promote open science.

Mariella Paul

I asked them about their views on how open science affects and shapes their research and their field.

Can you tell me about your interest in open science?

MP: The first time I heard about open science and the replication crisis was at a conference I attended during my Master’s, but I only really got into it during my PhD, when I learned much more about it through academic Twitter and started to apply it to my own research. I think the ideas around open science appealed to me as a (then very) early career researcher (ECR) because they were how I, perhaps idealistically, thought science should be done. I have heard the same sentiment from bachelor’s (or undergrad) students when giving lectures about open science practices: “Why wasn’t it always done like this?” After learning bits and pieces from Twitter and podcasts, such as ReproducibiliTea and the Black Goat, I got in touch with other ECRs at my institute and we founded an open science initiative, organized workshops for our colleagues and ourselves to learn more about open science, and eventually even started our own ReproducibiliTea journal club, where we read and discuss papers about different open science practices.

NM: My interest in open science is relatively recent. I am quite late to the party, and my invitation is by virtue of the people in my lab who keep finding better ways to do science. My interest is driven by the fact that the small steps towards transparency and best practice that we take in successive projects not only make us more confident of the results we report but also make us calmer in planning projects. What I find interesting, and quite marvelous actually, is that this trend towards greater transparency in research and reporting is being spearheaded by young researchers. That’s really amazing to me because, as a tenured Professor, that next publication – and the lingering difficulties associated with publishing null results – is not going to impact my next paycheck, but it might well impact the future prospects of the young researchers who are leading this change, who nevertheless weigh doing science well equally with getting cool results!

How does transparency in reporting affect your own research?

MP: My PhD consisted largely of conceptual replications; that is, I replicated studies previously done with infants and adults with young children. Directly building on previous studies has clearly illustrated the need for transparent reporting for me, because only with transparent reporting and shared materials can one hope to conduct a close replication. Therefore, for my own research, I aim to report my methods as transparently as possible, to make the lives of future researchers wanting to run replications or meta-analyses easier.

Photo by Markus Spiske on Unsplash

NM: I think the best thing to say for it is that it frees you. There is, on the one hand, more acceptance these days for the publication of null results, but also, more importantly, greater appreciation for the scientific process rather than the scientific result. This makes for a much more relaxing climate to be a researcher in, since you don’t need to find that perfect result; you need only to document that you went about looking for evidence of that effect in an appropriate manner. This makes you more conscious of critically evaluating your methods prior to testing while leaving you rather calm about the result of your manipulation. So, for instance, in my group we now routinely write up the Introduction, Methods, and Planned analyses of a paper before we start testing. This makes us think much more about what it is we are actually testing, what we plan to analyze, whether we can conduct the analyses we hope to, and whether those analyses actually test the hypotheses under consideration. I think this way of planning studies not only makes us methodologically rigorous but also makes us more likely to actually find meaningful effects.

Why do you think pre-registration matters in developmental cognitive psychology?

MP: I think pre-registration can be valuable for any confirmatory study, by adding transparency early during the research process and by decreasing researchers’ analytic flexibility. In developmental cognitive psychology in particular, we deal with unique issues. For example, when working with infants and young children, data collection and drop-outs require special attention. Pre-registration can help us set some of the parameters around these issues beforehand, for example by pre-specifying transparent data-peeking and planning a correction for sequential testing. I work a lot with EEG, where we additionally have a myriad of analytic decisions to make in how to preprocess the data. Here too, pre-registration can decrease researchers’ analytic flexibility and reduce bias by making these decisions before seeing the data.
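To make the idea of planned “data peeking” concrete, here is a minimal sketch in Python of a pre-registered sequential design with a Pocock-style correction. It is an illustration only, not a procedure from any study in the Collection: the look schedule, the simulated effect size, and the adjusted per-look threshold are all assumptions chosen for the example.

```python
# A sketch of pre-registered "data peeking" with a sequential-testing correction.
# All parameters below are illustrative assumptions, not values from a real study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
looks = [20, 40, 60, 80]   # pre-registered interim sample sizes per group
alpha_per_look = 0.0182    # Pocock-adjusted threshold for 4 looks, overall alpha = .05

# Simulated data with a hypothetical true effect of d = 0.4
control = rng.normal(0.0, 1.0, size=max(looks))
treated = rng.normal(0.4, 1.0, size=max(looks))

for n in looks:
    t, p = stats.ttest_ind(treated[:n], control[:n])
    print(f"n per group = {n}: p = {p:.4f}")
    if p < alpha_per_look:  # stop early only if the pre-registered boundary is crossed
        print("Stopping: pre-registered boundary crossed.")
        break
else:
    print("No boundary crossed: report the final result, whatever it is.")
```

The point is that the flexibility to stop early is declared before any data are seen, together with the correction that keeps the overall false-positive rate at the planned level, so peeking no longer inflates it.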

NM: Developmental research is plagued by many of the same issues as cognitive science more broadly, unfortunately amplified by difficulties with access to participant pools (babies are more difficult to recruit than undergraduate students) and the resulting constraints on sample size, by the shorter attention spans of participants (leading to shorter and less well-powered experiments), and by greater variance in infant responding. Thinking more carefully about the study and what you actually have adequate power to do – as one is forced to with a preregistration – may help us avoid the costly mistake of running under-powered studies that eventually lead to inconclusive results. From a pragmatic point of view, preregistration also helps us better motivate analysis choices that may be questioned later in the process. In a recent review of a paper, for example, we were asked why we chose a particular exclusion criterion. We did not preregister this analysis (it’s a relatively old study that is only now seeing the light of day) but based this exclusion criterion on previous work; had we preregistered it, it would have been easier for us to justify our choice. As it stands, I can see that a skeptical reviewer may be inclined to believe our choice of this exclusion criterion is post hoc.
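For readers who have not run one, here is a minimal sketch of the kind of a-priori power calculation a preregistration forces you to confront, using the statsmodels library in Python. The effect size and sample sizes are hypothetical, chosen only to show how quickly power erodes at the sample sizes typical of infant research.

```python
# A sketch of an a-priori power analysis for a two-group comparison.
# The effect size (d = 0.35) and n = 24 per group are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()

# Sample size per group needed to detect d = 0.35 with 80% power at alpha = .05:
n_required = power_analysis.solve_power(effect_size=0.35, alpha=0.05, power=0.8)
print(f"Required n per group: {n_required:.0f}")  # roughly 130 per group

# Power actually achieved by a hard-won infant sample of 24 per group:
achieved = power_analysis.solve_power(effect_size=0.35, alpha=0.05, nobs1=24)
print(f"Power with n = 24 per group: {achieved:.2f}")  # roughly 0.22
```

In this hypothetical scenario, a study of 24 infants per group would detect a true effect of that size only about one time in five, which is exactly the kind of costly mistake an a-priori power analysis surfaces before testing begins.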

How does the field of developmental cognitive psychology differ now compared to 10-15 years ago, and has open science played a role in that?

MP: I have only been in the field for a few years, but even in that time, I think open science has played a role in the development of the field. For example, large-scale replication efforts such as the ManyBabies project help us better understand central findings in our field, such as infants’ preference for speech presented in a child-directed manner. Similarly, platforms such as Wordbank – an open database of children’s vocabulary – and MetaLab – an interactive tool for meta-analysis in cognitive development – are now available for everyone to run their own studies on large-scale data.

there is greater acceptance of such “failed” experiments these days and this is to a large extent due to our increased appreciation for the scientific process (including open science practices) rather than the result.

Guest Editor Nivedita Mani

NM: To be really honest, on a personal level, I am rather shamefaced about the practices that I believed acceptable 10 years ago. For instance, 10 years ago, I posted on social media that my “failed” experiments folder was 1.5 times larger than my “successful” experiments folder. Back then, it didn’t occur to me that the failed experiments folder (null results, to be precise) was as important as the published successful experiments folder – and indeed, they were not failures, because they were providing us valuable information about the contexts in which we do not find evidence for particular effects. Now, however, there is greater acceptance of such “failed” experiments, and this is to a large extent due to our increased appreciation for the scientific process (including open science practices) rather than the result. At the same time, there is greater emphasis on correct reporting of results, which, I belatedly realize, I have been on the wrong side of by not reporting aspects of the analyses that were important to the interpretation of the results. I think this is changing too, with greater awareness of what we need to report about the analyses we perform.

What do you see as the greatest challenges for the field going forward?

MP: I think with the current development of the field and psychology in general, there are many challenges as well as opportunities. For many, including myself, one of the most direct challenges recently has been the restrictions on data collection due to the pandemic. With lab studies as we know them having been impossible (or possible only to a very limited degree) for over half a year now, many projects have had to be delayed, and we have been forced to rethink the way we plan new experiments. However, this unique situation also offers the possibility of conducting studies that we perhaps would not usually have thought of. For example, meta-analyses of previous studies in the literature can be conducted even when the lab is closed, and so, of course, can online studies. The time away from the lab can also be used to get started on new open science practices. For example, a registered report can be written and submitted so that the stage-1 protocol [i.e., a Registered Report Protocol at PLOS ONE] is already accepted by the time testing can be resumed.

NM: We seem to have achieved greater understanding of the requirements of good science, but I do worry about the extent to which we can implement these requirements. How can we run well-powered studies in developmental research, given restrictions on access to participant pools and infant attention spans? Cross-laboratory efforts (like the ManyBabies projects, or a recent project on the effect of the COVID-19 lockdown on language development that I am involved in) may be the way forward here, allowing us to pool resources across laboratories. Equally, we are looking more deeply into sequential Bayesian designs, which may allow us to get around some of the problems I have mentioned (sample size, power, inconclusive results). In general, I think we need to get more inventive about how to continue doing good developmental research.

At the same time, I don’t know if we really know how to analyze our data. For the more critical questions that the field is asking these days, I don’t really see one correct answer – and unfortunately, I don’t feel qualified to choose one answer over another. Again, I think greater transparency in research reporting helps here, because I get to post my data, my analyses, and the results that I obtained with those analyses. This allows someone else to look through my data and analyze it differently to see if the pattern holds. Having said that, I also don’t think we are where we could be with regard to this solution – at least, I know my group isn’t – in terms of how well we archive our data and how transparent it is for others to use. That is definitely going to be one of the challenges we will face going forward.

Introducing PLOS ONE’s Education Research Collection

Understanding teaching and learning—what works, how, and for whom—is an academic endeavor in its own right. Such research may be conducted by medical researchers who are also educators, or by education scholars whose main expertise is in pedagogy or educational psychology. As PLOS ONE welcomes rigorous original research regardless of disciplinary boundaries, it is a home for education research. Our journal welcomes a variety of study designs and methods, including quantitative research but also, importantly for education research, mixed-methods and qualitative research.

The newly published Education Research Collection illustrates the breadth of contributions made in PLOS ONE to this field over the years. The Collection includes both small-scale interventions (such as on the teaching of fractions related to the Common Core State Standards [1]) and curriculum-wide observations (for example, of the most effective forms of active learning in biology classrooms [2]). It ranges from early childhood development (such as the assessment of the Early Childhood Environment Rating Scale [3]) to higher education faculty professional development programs (for instance, on changing the teaching practices of science faculty [4]).

Credit: Kimberly Farmer

The Collection includes a variety of study types, whether randomized controlled trials (such as the assessment of educational tools [5]), large longitudinal studies (such as one on the long-term effects of the early childhood Chicago School Readiness Project [6]), or systematic reviews and meta-analyses (such as on the effect of child-staff ratios in early childhood education [7]). It highlights innovative programs (for instance, on teaching critical thinking skills and argumentation through bioethics education [8]) and novel teacher assessment tools (for instance, in medical education [9]).

This Collection also includes studies on the gender gap in STEM education, such as the effect of an intervention on gender ratios in higher education [10] or the analysis of multinational PISA data [11].

The papers in this Collection also include several studies on teacher attitudes that can be relevant to their professional development, whether on teachers’ attitudes toward the inclusion of students with disabilities [12], on teachers’ emotions [13], or on the role of teachers’ expectations in the association between children’s socio-economic status and their performance [14].

This is only a small selection of the education research published at PLOS ONE over the years, and we welcome new submissions to this Collection.

For authors who are new to education research

If you want to know more about methodological and reporting standards in the field, we can recommend some useful resources such as:

  • American Psychologist’s Journal Article Reporting Standards for both quantitative research (Appelbaum et al., 2018 [15]) and qualitative research (Levitt et al., 2018 [16]).
  • Annotated education research articles at the CBE—Life Sciences Education journal’s website (published by the American Society for Cell Biology).
  • The Institute of Education Sciences’ resources for researchers.

For papers describing new methods or programs, including teaching methods and class interventions, PLOS ONE has specific criteria of utility, validation, and availability (see more here). 

Credit: Jess Bailey

In general, we expect sufficient methodological detail to enable other teachers and researchers to replicate a teaching intervention, such as sample worksheets, a detailed lesson plan or curriculum, or other educational materials. For any intervention, we look for sufficient details to assess its generalizability: how students were recruited, the frequency of class meetings, teachers’ experience, teaching objectives, and school setting, but also detailed assessment methods and a comparison with existing methods. Educational evidence may consist of both quantitative assessments (such as pre/post test results) and qualitative evidence (such as student work and testimonies).

We also expect papers to provide sufficient background about the study they report, to embed its rationale in relevant scholarly discussions about education theory, or to motivate a pedagogical intervention with reference to teaching standards, when applicable.

We look forward to your continuing submissions in education research to PLOS ONE and are excited to see this Collection grow in the years to come!   

References

[1] Fazio, L. K., Kennedy, C. A., & Siegler, R. S. (2016). Improving children’s knowledge of fraction magnitudes. PLOS ONE, 11(10), e0165243.

[2] Weir, L. K., Barker, M. K., McDonnell, L. M., Schimpf, N. G., Rodela, T. M., & Schulte, P. M. (2019). Small changes, big gains: A curriculum-wide study of teaching practices and student learning in undergraduate biology. PLOS ONE, 14(8), e0220900.

[3] Brunsek, A., Perlman, M., Falenchuk, O., McMullen, E., Fletcher, B., & Shah, P. S. (2017). The relationship between the Early Childhood Environment Rating Scale and its revised form and child outcomes: A systematic review and meta-analysis. PLOS ONE, 12(6), e0178512.

[4] Bush, S. D., Rudd, J. A., Stevens, M. T., Tanner, K. D., & Williams, K. S. (2016). Fostering change from within: Influencing teaching practices of departmental colleagues by science faculty with education specialties. PLOS ONE, 11(3), e0150914.

[5] Diamond, A., Lee, C., Senften, P., Lam, A., & Abbott, D. (2019). Randomized control trial of Tools of the Mind: Marked benefits to kindergarten children and their teachers. PLOS ONE, 14(9), e0222447.

[6] Watts, T. W., Gandhi, J., Ibrahim, D. A., Masucci, M. D., & Raver, C. C. (2018). The Chicago School Readiness Project: Examining the long-term impacts of an early childhood intervention. PLOS ONE, 13(7), e0200144.

[7] Perlman, M., Fletcher, B., Falenchuk, O., Brunsek, A., McMullen, E., & Shah, P. S. (2017). Child-staff ratios in early childhood education and care settings and child outcomes: A systematic review and meta-analysis. PLOS ONE, 12(1), e0170256.

[8] Chowning, J. T., Griswold, J. C., Kovarik, D. N., & Collins, L. J. (2012). Fostering critical thinking, reasoning, and argumentation skills through bioethics education. PLOS ONE, 7(5), e36791.

[9] Arah, O. A., Hoekstra, J. B., Bos, A. P., & Lombarts, K. M. (2011). New tools for systematic evaluation of teaching qualities of medical faculty: results of an ongoing multi-center survey. PLOS ONE, 6(10), e25983.

[10] Sullivan, L. L., Ballen, C. J., & Cotner, S. (2018). Small group gender ratios impact biology class performance and peer evaluations. PLOS ONE, 13(4), e0195129.

[11] Stoet, G., & Geary, D. C. (2013). Sex differences in mathematics and reading achievement are inversely related: Within- and across-nation assessment of 10 years of PISA data. PLOS ONE, 8(3), e57988.

[12] Vaz, S., Wilson, N., Falkmer, M., Sim, A., Scott, M., Cordier, R., & Falkmer, T. (2015). Factors associated with primary school teachers’ attitudes towards the inclusion of students with disabilities. PLOS ONE, 10(8), e0137002.

[13] Frenzel, A. C., Becker-Kurz, B., Pekrun, R., & Goetz, T. (2015). Teaching this class drives me nuts!-Examining the person and context specificity of teacher emotions. PLOS ONE, 10(6), e0129630.

[14] Speybroeck, S., Kuppens, S., Van Damme, J., Van Petegem, P., Lamote, C., Boonen, T., & de Bilde, J. (2012). The role of teachers’ expectations in the association between children’s SES and performance in kindergarten: A moderated mediation analysis. PLOS ONE, 7(4), e34502.

[15] Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., & Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: The APA Publications and Communications Board task force report. American Psychologist, 73(1), 3.

[16] Levitt, H. M., Bamberg, M., Creswell, J. W., Frost, D. M., Josselson, R., & Suárez-Orozco, C. (2018). Journal article reporting standards for qualitative primary, qualitative meta-analytic, and mixed methods research in psychology: The APA Publications and Communications Board task force report. American Psychologist, 73(1), 26.
