It takes two, or more, to tango. Why a focus on fair and equitable research collaborations is essential for global health – and how to achieve it.

Guest bloggers: Carel IJsselmuiden (COHRED, University of KwaZulu-Natal, South Africa); Bipasha Bhattacharya (COHRED, Corresponding Author, rfi@cohred.org), Julia Vallauri and Eric Martin (Institut de recherche pour le développement, France).

In celebration of World Health Day on April 7, 2022, the team behind the Research Fairness Initiative have written a guest blog on their effort to increase fairness in research collaborations across the globe. Although the initiative was inspired by research in global public health, the framework can be applied to any research collaboration, allowing contributors to assess the fairness and equity of their collaborative projects.

Good health is crucially dependent on research – on good research: research that is excellent, relevant, ethical and timely. Such research is rarely done by individuals in isolation, as a garage-based effort – although it could be. In reality, excellent research requires more than individuals: it requires top institutions, supportive environments, financing, supportive legislation and international treaties, rewards and awards, opportunities to translate results into scalable innovations, and much more. To capture this complexity, we will use the term ‘research systems’ or ‘research and innovation systems’.

Low- and middle-income countries (LMICs) often lack many of the components that are essential to a high-performing research and innovation system. That is no surprise, given that many other sectors in LMICs also lack components that make those sectors work better in high-income environments. In fact, so strong is the generalized perception of under-performing research and innovation systems in LMICs that even the many current calls for, and initiatives to achieve, better ‘pandemic preparedness’ rarely mention the need for capable R&D systems in LMICs. And this is in spite of overwhelming evidence that the LMICs with high-performing R&D capabilities were not only less affected by ‘vaccine inequity’ but also made the largest contributions towards vaccinating the populations of other LMICs.

Capable research, development and innovation systems are a basic requirement for LMICs. LMICs should invest in these systems themselves, and the ‘global community’ should support them. This may sound simple, but on the dance floor of international research collaborations, the movements do not recognizably add up to a tango. Divergent, project-based efforts – driven by the prescriptions of high-income-country funders rather than by national LMIC priorities, without clear links to financing that can scale results, and without systematic efforts to improve tango skills, dancing shoes or the ballroom itself – are more conducive to sore toes, falls, profanities and disappointment than to gradual increases in performance, outputs and outcomes.

‘Development agencies’ and philanthropies supporting LMICs rarely recognize that developing research capabilities in LMICs is essential to success – especially sustainable success. This applies particularly to global health research, as the COVID-19 pandemic demonstrates so clearly.

Given this conundrum – the need for long-term support for complex research systems in LMICs, combined with the absence of serious, targeted and long-term funding for this and the often low or absent investments by LMICs themselves – we believe the evidence is growing that a continuing effort to improve the fairness and equitability of research partnerships is both essential and catalytic. To return to the dance floor: would the tango not be immensely more productive and enjoyable if all partners had access to similar skills, training, music and shoes – even if on loan for a while?

The Research Fairness Initiative (RFI) (https://rfi.cohred.org)

The RFI is a direct response to the need for a pragmatic instrument to continuously improve research and innovation partnerships with low- and middle-income countries (LMICs). The RFI is unique: it can generate the transparency and systematic institutional learning required to improve how organisations engage in and manage research and innovation collaborations fairly and equitably, for greater impact. While its priority focus was on collaborations between institutions in high-income countries and those in LMICs, the RFI is clearly also appropriate for collaborations among high-income countries.

The RFI elevates research partnerships from ad hoc arrangements between individual researchers to key performance areas for all main actors in research and innovation, in particular:

• Research and Academic institutions
• Government Departments responsible for research and innovation
• National Research and Innovation Agencies
• Research Funders
• Private Sector organizations with a major research and innovation portfolio
• International organizations, large non-profits, and others.

Equitable and fair research collaborations are crucially important to enable LMICs to develop the excellence and sustainability of their research institutions and systems. At this time, the RFI provides the only pragmatic, systematic and global approach to improve the way research collaborations are done – even between high-income institutions themselves.

The RFI has been co-designed through wide and extensive global consultations. Its process can be viewed here: https://rfi.cohred.org/rfi-history/. Its continued improvement is done with all organisations using or supporting the RFI.

The RFI ‘System’ consists of two complementary components:

1. RFI Reporting – biennial institutional self-assessments. The RFI framework of questions and indicators provides a pragmatic tool for institutional self-assessment of the policies and practices used to promote fairness and equitability in their research collaborations. Its focus is forward: ‘How to improve policies and practices in the next 2 years’. Responding to the questions in the RFI Framework often provides a first opportunity for organisations to strategically and systematically assess their own partnership policies, practices and expectations. A short overview of questions and indicators can be found at: https://rfi.cohred.org/wp-content/uploads/RFI_Summary_Guide_1.pdf

2. The RFI Global Learning Platform aggregates and analyses the information provided by institutions in their RFI Reports. Once fully developed, it will provide both real-time and special reports to enhance the evidence base the world of research needs to improve research partnerships and, where possible, reach global agreements on standards or benchmarks.

The full RFI website information can be viewed here: https://rfi.cohred.org

For full certification, organisations have to publish their RFI reports on their own corporate websites AND enable a comment function for readers.

Once complete, the RFI website will republish these reports and encourage further comments that will remain anonymous to the organisation. In this way, the RFI System should become a global platform for learning and action.

RFI Reporting Organisations

The RFI has seen a slow but gradually increasing uptake since its release as ‘version 1’ in 2019. We now have reports from almost all the main research actors listed above, and more. The COVID-19 pandemic did not help its progress, but the pace of completing reports is picking up. Five completed reports are currently available: Nova University of Lisbon (Portugal), the World Health Organization (WHO/TDR), Univ Alioune Diop de Bambey (UADB, Senegal), the Institut de Recherche pour le Développement (IRD, France), and the Swiss TPH (Switzerland). Four more are submitted or close to completion: Fondation Botnar (Basel, Switzerland), the University of Cape Town (South Africa), Epicentre Paris (MSF, France), and the Centre for the AIDS Programme of Research in South Africa (CAPRISA, South Africa). We are also aware of several organisations in the process of obtaining approval to start their RFI reporting.

Although the number of institutions is small, looking at this list of ‘early adopters’, the results of these reports are likely to impact many global, regional and national partnerships.

There have also been other uses of the RFI System. For example, the Philippines made 2021 its year of Research Fairness using the RFI, while the Ministers of Health of the Community of Portuguese Language Countries (CPLP) have recommended the RFI as a guide for intra-CPLP health research collaborations.

The Evidence-base is growing

The sample is still small, but early lessons include:

• Completing the RFI Report is, for many organisations, the first and only strategic assessment of their own partnership policies, practices and expectations. It has proven to be an eye-opener for all, without exception.
• It is a challenge to change perspective – away from writing a ‘report card’ focused on past performance, towards preparing a two-year, forward-looking improvement plan for fairness and equitability in research collaborations. Once the perspective has changed, it encourages interest, engagement, creativity and an intent to learn how others are doing.
• Organisations have original policies and practices, unknown outside their own walls, that will clearly be helpful to others – and that could generate global consensus on standards or benchmarks in future.
• Different organisations voiced different concerns. Some fear a funder backlash should their responses to the RFI’s financial-management questions prove inadequate (our informal evidence suggests the contrary: awareness of needs for support generates support). Others worry about the additional administrative load, which must be paid for from scarce core funding (with the new interactive web-based RFI reporting platform, producing a first draft report takes less than a day once the underlying information is available; and all the questions are relevant for any self-respecting research actor, so if the answers take a long time to find, that is not because the RFI is complex but because these indicators should have been measured in the first place). Still others are concerned about receiving comments from partners (in fact, transparency is what enables discussion and negotiation – the basis for great and lasting partnerships).

Next steps for the RFI

It seems the RFI is in the early phase of adoption, and we anticipate a faster uptake from all constituencies and also from ‘enablers’, like journals and funders. We look forward to journals and funders making it a requirement for any lead organisation of research collaborations involving LMICs to submit their RFI reports. Imagine if everyone would play ball…

The RFI and World Health Day

Underlying global health is high quality research. Underlying high quality research are great researchers and research systems. Global health, resilience, pandemic preparedness and a future able to deal with environmental challenges will be well served by focusing far more clearly on the sustainable development of research and innovation capabilities in and with LMICs. Equitable and fair partnerships are essential in achieving this.

Key references:

Carvalho A, IJsselmuiden C, Kaiser K, et al. (2018) Towards equity in global health partnerships: adoption of the Research Fairness Initiative (RFI) by Portuguese-speaking countries. BMJ Global Health 3:e000978. http://dx.doi.org/10.1136/bmjgh-2018-000978

Lavery JV and IJsselmuiden C. (2018) The Research Fairness Initiative: Filling a critical gap in global research ethics [version 1; peer review: 2 approved]. Gates Open Res 2:58. https://doi.org/10.12688/gatesopenres.12884.1

IJsselmuiden C, Garner C, Ntoumi F, Montoya J, Keusch GT (2021) R&D – more than sharing vaccines. A complete change is needed in the approach to and funding of global preparedness. Think Global Health. https://www.thinkglobalhealth.org/article/rd-more-sharing-vaccines

Further reading:

In September 2021, PLOS launched a policy on Inclusivity in Global Research, which aims to improve reporting of global research. Authors conducting research of this nature who submit research to PLOS journals for consideration for publication may be asked to complete a questionnaire that outlines ethical, cultural, and scientific considerations specific to inclusivity in global research.

Disclaimer: Views expressed by contributors are solely those of individual contributors, and not necessarily those of PLOS.

Cover image credit: Rich Briggs, USGS. Public Domain.

The post It takes two, or more, to tango. Why a focus on fair and equitable research collaborations is essential for global health – and how to achieve it. appeared first on EveryONE.

Collaboration on the road to better preclinical research

In this blog post Adrian Smith from Norecopa discusses the role that the PREPARE Guidelines and Website play in improving the robustness and translatability of animal studies.

Criticism of animal research has come from an unexpected quarter in recent years: scientists themselves have expressed concerns about the poor reproducibility and translatability of preclinical studies [1,2]. The criticism includes, among other things, concerns about underpowered experiments, bias caused by a lack of randomisation and blinding, and incorrect use of statistical analyses in attempts to demonstrate significance.

Working inside animal facilities has given me insight into some additional causes of this state of play. From that viewpoint, it quickly becomes clear that high-quality animal research is utterly dependent upon collaboration, from the earliest possible stage, between scientists and animal care staff. While scientists have the insight and expertise to plan relevant experiments, they rely on those with intimate knowledge of the animals to translate these plans into valid and robust in vivo studies. Even simple events such as a scientist’s request for a blood sample trigger questions about a range of issues, including factors that affect the quality and shelf-life of the sample, and the physiological effects of the blood loss on the animal [3].

Similar practical questions arise for every aspect of a study, many of which will never be reported in the publications ensuing from the work. These include assessment of the facility’s standard and competence, staffing levels, refinement and standardisation of procedures, distribution of costs, and health and safety concerns. Early dialogue with animal care staff, who are often proficient at lateral thinking, may reveal better ways of conducting a procedure, thanks to their previous experience. If these questions are inadequately addressed, we risk that animal-related issues become the greatest sources of variability or poor validity in a study. Mutual respect for the skills and knowledge of scientists and lab animal staff alike is therefore paramount if we are to improve reproducibility and translatability.

Although better reporting is often promoted in connection with the reproducibility crisis, this is only part of the solution. There is no doubt that improved reporting is needed, both to aid evaluation of the quality of experiments and to enable them to be repeated. Inadequacies that have been demonstrated include faulty study design, poor descriptions of the animals, and insufficient detail about their environment and peri-operative care [4-6]. Reporting guidelines have been published for many years to address these issues [7], but without better planning we are merely trying to improve the description of a burnt cake: we need to go back to the kitchen and change the recipe. Lack of compliance with reporting guidelines, or even knowledge of them, despite journal endorsement, is also a major problem [8-9].

In 2016, the EU Commission was faced with a European Citizens’ Initiative (ECI) to ban animal experimentation [10]. Public opinion plays an important role in defining animal research, and we have both legal and ethical obligations not to waste animal lives. Part of the Commission’s response to the ECI was to hold a meeting entitled Non-Animal Approaches – The Way Forward in Brussels in December 2016 [11]. Here, almost for the first time, I heard participants discussing the need for planning guidelines. Spurred on by this, we published the PREPARE guidelines in 2017 [12]. PREPARE consists of a checklist covering 15 main areas (Figure 1) and a website with supplementary information and references to resources for each of the 40 topics on the checklist (norecopa.no/PREPARE).

Figure 1 – The PREPARE checklist (https://norecopa.no/PREPARE/prepare-checklist).

PREPARE guides scientists through all the steps of planning animal experiments. The principles embodied in PREPARE apply to all in vivo studies, both in dedicated facilities and in the field, regardless of species used and the level of their legal protection. The PREPARE checklist is divided into three sections: formulation of the study, dialogue between the scientists and animal care staff, and quality control of the components in the study. Some of these topics will be the responsibility of the animal facility rather than of the research team, but it is important that all have been considered. The first section gives advice on literature searching (including systematic reviews), legal aspects, harm-benefit assessment, humane endpoints and other ethical issues, experimental design and statistical analysis. The second section covers important practicalities such as the distribution of labour and responsibilities, facility evaluation and competence, as well as health and safety issues. The third section provides tips on quality control of all the stages of a preclinical study, including the test substances, procedures, the animals and their environment, husbandry methods, humane killing and necropsy routines. PREPARE aims to give practical advice which will ensure both scientific rigour and optimal animal welfare.

Although only recently published, the principles in PREPARE were developed over a 20-year period, during which earlier versions were discussed with scientists on courses in Laboratory Animal Science. PREPARE is also full of lessons learned by animal facilities when applying for international accreditation. The checklist has proved popular: it has been translated into over 20 languages, and the website is updated regularly as new resources are published. A 3-minute cartoon film, with optional subtitles in many languages, has been produced to illustrate the main principles embodied in PREPARE (norecopa.no/PREPARE/film).

We are now working to encourage uptake of PREPARE by scientists on a voluntary basis. This is being done by arranging workshops and webinars, through newsletters and social media, and by contacting research animal facilities. PREPARE is not intended to be yet another hurdle on the road to publication. On the contrary, it is our hope that PREPARE will be seen as a means of ensuring that all the issues likely to be raised by reviewers will have been addressed before it is too late. PREPARE should also prove helpful to those who evaluate proposals for animal studies, including funding bodies, ethical review boards and regulatory authorities.

To complete this cycle of increased quality, I would also like to see more emphasis in scientific papers on the efforts the authors have made to replace, reduce or refine animal use (“the three Rs”) [13]. This emphasis is unfortunately not a clear feature of reporting guidelines. We owe it to the general public, to our funders and to the animals. We must do this prominently, since many bibliographic databases index only the title and abstract of a paper. My 3-step recipe for better science is therefore:

  1. Be PREPAREd: plan in collaboration with animal care staff from day 1
  2. Demonstrate that you have ARRIVEd: submit a manuscript which documents how the potential causes of irreproducibility have been tackled
  3. Flag the 3Rs: highlight efforts to refine, reduce or replace animal use

I am optimistic. In the words of Edward Everett Hale [14]:

Coming together is a beginning, Keeping together is progress, Working together is success.

About the Author

Adrian Smith studied Veterinary Medicine at Cambridge University, graduating in 1979. After working in clinical veterinary practice in the UK, he emigrated to Norway and held positions related to animal research at the Norwegian School of Veterinary Science for 30 years. He was awarded his PhD in 1988 for his work on seasonal variations in the reproductive activity of the Arctic fox (Vulpes lagopus). He held the Chair in Laboratory Animal Science from 1988 to 2011, and led the work of obtaining and renewing international accreditation of the School’s laboratory animal facilities. During this period he also arranged over 50 courses in Laboratory Animal Science for scientists and technicians, and served on the Norwegian Animal Research Authority, during which time he was involved in writing the draft of new national legislation on animal research.
Adrian has been Secretary of the Norwegian platform for the replacement, reduction and refinement of animal experiments (Norecopa) since its foundation in 2007. He has led the work of developing Norecopa’s website (norecopa.no) into a source of international resources for scientists and the laboratory animal community. The website currently has over 9,000 pages and a quarter of a million hits annually. He led the work of publishing the PREPARE guidelines for planning animal experiments. Collaboration with international colleagues is an important part of this work, and the PREPARE checklist has been translated into over 20 languages. Other tasks for Norecopa have included the organisation of international consensus meetings on the care and use of animals in research, the production of position statements and guidelines about animal procedures, and the regular issue of newsletters in English.

References

  1. Munafò MR, Nosek BA, Bishop DVM, Button KS, Chambers CD, Percie du Sert N, Simonsohn U, Wagenmakers E-J, Ware JJ, Ioannidis JPA (2017) A manifesto for reproducible science. Nat. Hum. Behav. 1, 0021.
  2. Begley CG, Ioannidis JP (2015) Reproducibility in science: improving the standard for basic and preclinical research. Circ Res. 116(1):116-126. doi:10.1161/CIRCRESAHA.114.303819
  3. Blood sampling. https://www.nc3rs.org.uk/3rs-resources/blood-sampling Accessed 30 September 2020
  4. Baker, M (2016) 1,500 scientists lift the lid on reproducibility. Nature 533:452–454. doi:10.1038/533452a
  5. Avey MT, Moher D, Sullivan KJ, Fergusson D, Griffin G, Grimshaw JM, Hutton B, Lalu MM, Macleod M, Marshall J, Mei SHJ, Rudnicki M, Stewart DJ, Turgeon AF, McIntyre L (2016) The Devil Is in the Details: Incomplete Reporting in Preclinical Animal Research. PLoS ONE 11:e0166733. doi:10.1371/journal.pone.0166733
  6. Bradbury AG, Eddleston M, Clutton RE (2016): Pain management in pigs undergoing experimental surgery; a literature review (2012-4). Br. J. Anaesth. 116: 37-45. https://doi.org/10.1093/bja/aev301
  7. Norecopa (2020) Reporting guidelines. https://norecopa.no/more-resources/reporting-guidelines Accessed 30 September 2020
  8. Reichlin TS, Vogt L, Wurbel H (2016) The Researchers’ View of Scientific Rigor – Survey on the Conduct and Reporting of In Vivo Research. PLoS ONE 11: e0165999. doi:10.1371/journal.pone.0165999
  9. Percie du Sert N, Hurst V, Ahluwalia A, Alam S, Avey MT, Baker M, Browne WJ, Clark A, Cuthill IC, Dirnagl U, Emerson M, Garner P, Holgate ST, Howells DW, Karp NA, Lazic SE, Lidster K, MacCallum CJ, Macleod M, Pearl EJ, Petersen O, Rawle F, Reynolds P, Rooney K, Sena ES, Silberberg SD, Steckler T, Wurbel H (2020) The ARRIVE guidelines 2.0: updated guidelines for reporting animal research. PLoS Biol. 18(7): e3000410. doi:10.1371/journal.pbio.3000410
  10. Stop Vivisection. European Citizens’ Initiative. https://europa.eu/citizens-initiative/stop-vivisection_en Accessed 30 September 2020
  11. Non-Animal Approaches – the Way Forward. Report on a European Commission Scientific Conference held on 6-7 December at The Egg, Brussels, Belgium. ISBN 978-92-79-65840-2. doi: 10.2779/373944 https://ec.europa.eu/environment/chemicals/lab_animals/3r/pdf/scientific_conference/non_animal_approaches_conference_report.pdf Accessed 30 September 2020
  12. Smith AJ, Clutton RE, Lilley E, Hansen KEAa, Brattelid T (2018) PREPARE: Guidelines for planning animal research and testing. Lab. Anim. 52(2): 135-141. doi:10.1177/0023677217724823
  13. Norecopa (2020) The Three R’s. https://norecopa.no/alternatives/the-three-rs Accessed 30 September 2020
  14. Edward Everett Hale. https://en.wikiquote.org/wiki/Edward_Everett_Hale Accessed 30 September 2020

Featured Image: Pixabay CC0

Views expressed by Guest Bloggers are solely those of individual authors, and not necessarily those of PLOS.


Introducing PLOS ONE’s Science of Stories Collection

Stories have the power to shape our identities and worldviews. They can be factual or fictional, text-based or visual and can take many forms—from novels and non-fiction to conspiracy theories, rumors and disinformation. We can characterize stories by their plot, their characters, their audience, their style, their themes or their purpose. Given the massive power of stories to alter the course of society, innovative methods to understand them empirically and quantitatively are necessary.

Today, we are pleased to introduce PLOS ONE’s Science of Stories Collection, which includes submissions invited through a Call for Papers last year. The Call for Papers welcomed primary research papers that use diverse empirical methods to address real-world, data-rich problems. The Guest Editors overseeing the scope and curating the Collection are Peter Dodds (University of Vermont), Mirta Galesic (Santa Fe Institute), Matthew Jockers (Washington State University), and Mohit Iyyer (University of Massachusetts Amherst).

At launch, the Collection includes over 15 papers illustrating data-driven approaches to understanding stories and their impact. Some articles explore the nature of narrative and narrative thinking in texts and other media, for instance, the role of similarity in narrative persuasion, the effects of choosing violence in narratives, the importance of characters in narratives communicating risk of natural disaster, the impact of storytelling in complex collaborative tasks such as food preparation, and the role of narrative in collaborative reasoning and intelligence analysis.

Other articles present new methods to extract stories from datasets and datasets from stories, including automated narrative analysis via machine learning, systematic modeling of narrative structure and dynamics, and large-scale analysis of gender stereotypes in movies and books.

A third group of papers analyze how narratives are transformed and how they can transform people, for example, looking at the co-evolution of contagion (e.g., disease, addiction, or rumor) and behavior, social media’s contribution to political misperceptions in US elections, how people’s intuitive theories of physics can partly account for how they think about imaginary worlds, how narrative can induce empathy for people engaging in negative health behaviors, and the impact of mental health recovery narratives on health outcomes.

A final group of papers explores the communication of data-rich narratives to the public, including the relative effectiveness of video abstracts and plain language summaries versus graphical abstracts and published abstracts, newly emerging platforms for writing and commenting on literary texts at unprecedented scale, and the role of narrative in perceived authenticity in science communication.

Papers will continue to be added to the Collection as they reach publication, so we invite you to revisit the Collection again for additional insights into the science of stories.

Guest Editors

Peter Sheridan Dodds

Peter Dodds is Professor at the University of Vermont’s Department of Mathematics and Statistics. He is Director of the Vermont Complex Systems Center and co-runs the center’s Computational Story Lab. With a general interest in stories and narratives, complexification, contagion, and robustness, Dodds focuses his research on system-level, big-data problems of all kinds, often networked, sociotechnical ones. His work has been supported by an NSF CAREER award to study sociotechnical phenomena, the McDonnell Foundation, the Office of Naval Research, NASA, the MITRE Corporation, Computer Associates, and Mass Mutual.

Mirta Galesic

Mirta Galesic is Professor and Cowan Chair in Human Social Dynamics at the Santa Fe Institute, External Faculty at the Complexity Science Hub in Vienna, Austria, and Associate Researcher at the Harding Center for Risk Literacy at the Max Planck Institute for Human Development in Berlin, Germany. She studies how simple cognitive mechanisms interact with social and physical environments to produce seemingly complex social phenomena. She develops empirically grounded computational models of social judgments, social learning, collective problem solving, and opinion dynamics. She is also interested in how people understand and cope with uncertainty and complexity inherent in many everyday decisions.

Mohit Iyyer

Mohit Iyyer is an Assistant Professor in computer science at the University of Massachusetts, Amherst. Previously, he was a Young Investigator at the Allen Institute for Artificial Intelligence. Mohit obtained his PhD at the University of Maryland, College Park, advised by Jordan Boyd-Graber and Hal Daumé III. His research interests lie in natural language processing and machine learning. Much of his work uses deep learning to model language at the discourse level, tackling problems like generating long coherent units of text, answering questions about documents and understanding narratives in fictional text.

Matthew L. Jockers

Matthew L. Jockers is Dean of the College of Arts & Sciences and Professor of English and Data Analytics at Washington State University in Pullman, WA. Jockers has been leveraging computation to understand narrative and style since the early 1990s. His books on the subject include Macroanalysis: Digital Methods and Literary History, Text Analysis with R for Students of Literature, and The Bestseller Code. In addition to his academic work, Jockers helped launch two text mining startups and worked as Principal Research Scientist and Software Development Engineer in iBooks at Apple.



Introducing the Targeted Anticancer Therapies and Precision Medicine in Cancer Collection

While the rate of death from cancer has been declining since the 1990s, an estimated 9.6 million people died from cancer in 2018, making it the second-leading cause of death worldwide [1]. According to

‘Wicked problems’ and how to solve them

In this Guest Blog, PLOS ONE Academic Editor, Sieglinde Snapp, discusses the challenges faced in sustainability research to solve complex, so-called “Wicked Problems”, and how conferences such as Tropentag are bringing together researchers from multiple

A Big Paper for a Tiny Dinosaur

In paleontology, the fossil is the basic data point for any research, regardless of the amount of technology used. Consequently, descriptions of a fossil’s anatomy are critical for scientists answering a variety of questions. What species is this animal? Look to the fossil. What did it eat? Look at the teeth. Where does the animal fit on the evolutionary tree? Compare its fossil with other fossils. Detailed documentation and description of a specimen isn’t particularly glamorous, but absolutely necessary.

The tiny plant-eating dinosaur Fruitadens scurried through the underbrush of Colorado around 150 million years ago, long before the rise of the Rocky Mountains. First named in a brief article in 2010, Fruitadens made a splash for its diminutive length of less than 1 meter and estimated body mass of under 1 kilogram. Unfortunately, the original publication did not have space for more than a general anatomical description as well as confirmation that Fruitadens’ small size wasn’t because it was “just” a baby of a larger species. Thus, a new paper in PLoS ONE by Richard Butler, Laura Porro, Peter Galton, and Luis Chiappe fills in many of the essential details.

Artist’s reconstruction of Fruitadens. By Smokeybjb, licensed under Creative Commons Attribution-Share Alike 3.0 Unported license.

Fruitadens belonged to an unusual group of dinosaurs called heterodontosaurids, geographically widespread but rare in the fossil record. They first appeared around 200 million years ago in South Africa, and persisted until around 140 million years ago in England. Heterodontosaurids were small (no more than 2 meters in maximum body length) and characterized by unusual fangs at the front of their jaws. Fruitadens was no exception—although its lower jaw is incomplete, the preserved portion of the teeth shows that it too probably had fangs. The rest of the teeth are more conventional, similar to those seen in other small plant-eating dinosaurs.

So, how did Fruitadens and other heterodontosaurids use their tiny, fanged jaws? The researchers developed simple two-dimensional models of the jaws in heterodontosaurids, reconstructing the movements associated with the bones and muscles. They identified a basic difference between early and late-surviving heterodontosaurids (including Fruitadens). Specifically, Fruitadens and its close relatives had simpler jaw anatomy than their ancestors, suggestive of a switch to simpler, weaker, and more rapid jaw movements. Although much more work remains, Butler and colleagues suggest that Fruitadens may have been an ecological generalist subsisting on a variety of plants, insects, and other small organisms. This contrasts with its ancestors, which subsisted primarily on plants.

A reconstruction of the skull of Fruitadens, from Butler et al. 2012.

Because they are so small, heterodontosaurid fossils are pretty scarce, and details of their evolutionary relationships are sketchy. Butler and colleagues carefully documented all of the relevant anatomical details in Fruitadens through photographs, CT scans, and text. In the process, the researchers identified some previously unrecognized features that characterize heterodontosaurids as a whole, and other formerly recognized features that do not. Although much work remains—particularly through the collection and description of new fossils—this new paper is an important step towards better understanding Fruitadens and its enigmatic kin.

REFERENCES

Butler RJ, Galton PM, Porro LB, Chiappe LM, Henderson DM, Erickson GM (2010) Lower limits of ornithischian dinosaur body size inferred from a diminutive new Upper Jurassic heterodontosaurid from North America. Proc Roy Soc B 277: 375–381.

Butler RJ, Porro LB, Galton PM, Chiappe LM (2012) Anatomy and cranial functional morphology of the small-bodied dinosaur Fruitadens haagarorum from the Upper Jurassic of the USA. PLoS ONE 7(4): e31556. doi:10.1371/journal.pone.0031556

IMAGE CREDITS:

Top image from http://en.wikipedia.org/wiki/File:Fruitadens.jpg, licensed under Creative Commons Attribution-Share Alike 3.0 Unported license.

Bottom image from Butler et al. 2012, Figure 1.

About the Author: Dr. Andrew Farke is a vertebrate paleontologist and an academic editor at PLoS ONE. He handled the manuscript described in this post. Andy also has a blog, The Open Source Paleontologist and can be followed via Twitter @andyfarke.

Multivariate Versus Univariate Conceptions of Sex Differences: Let the Contest Begin

Richard A. Lippa

The following guest post is written by Professor of Psychology, Richard A. Lippa. Dr. Lippa is a professor at California State University, Fullerton and is also a peer reviewer for PLoS ONE. In the following opinion piece, he comments on the paper, The Distance Between Mars and Venus: Measuring global sex differences in personality, which published in PLoS ONE today.

In their paper, “The Distance Between Mars and Venus: Measuring global sex differences in personality,” Del Giudice, Booth, and Irwing offer an interesting new perspective on sex differences and a useful critique of Hyde’s gender similarities hypothesis [1]. At core, Del Giudice and his colleagues ask: What is the proper metric to use when assessing sex differences in multivariate domains? They nominate the Mahalanobis D statistic—the multivariate generalization of the d statistic—as the best metric to assess sex differences in multi-trait individual differences domains such as personality, cognitive abilities, and interests, and they show empirically that, while the average sex difference across individual traits from a given domain (e.g., personality) may be relatively small, the multivariate effect size (D) can simultaneously be quite large.
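This distinction is easy to demonstrate numerically. The sketch below (a minimal illustration with simulated data, not a reanalysis of the authors’ dataset) computes univariate d values for three hypothetical traits that each differ only modestly between groups, then computes the Mahalanobis D, defined as the square root of the quadratic form of the mean-difference vector with the inverse pooled covariance matrix. Because D aggregates across dimensions, it exceeds any single-trait d.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated scores on three hypothetical traits for two groups.
# Each trait has a modest univariate difference (d of about 0.4).
n = 5000
mean_a = np.array([0.0, 0.0, 0.0])
mean_b = np.array([0.4, 0.4, 0.4])
cov = np.eye(3)  # uncorrelated traits, for simplicity
group_a = rng.multivariate_normal(mean_a, cov, n)
group_b = rng.multivariate_normal(mean_b, cov, n)

def cohens_d(x, y):
    """Univariate standardized mean difference, with pooled SD."""
    pooled_var = (x.var(ddof=1) + y.var(ddof=1)) / 2
    return (y.mean() - x.mean()) / np.sqrt(pooled_var)

def mahalanobis_D(x, y):
    """Multivariate generalization: D = sqrt(diff^T S^-1 diff),
    where S is the pooled covariance matrix."""
    diff = y.mean(axis=0) - x.mean(axis=0)
    pooled_cov = (np.cov(x, rowvar=False) + np.cov(y, rowvar=False)) / 2
    return float(np.sqrt(diff @ np.linalg.solve(pooled_cov, diff)))

ds = [cohens_d(group_a[:, i], group_b[:, i]) for i in range(3)]
D = mahalanobis_D(group_a, group_b)
```

With three uncorrelated traits at d = 0.4 each, D works out to roughly sqrt(3 × 0.16) ≈ 0.69, noticeably larger than any single trait’s d; correlated traits change the picture, which is exactly why the full covariance matrix enters the formula.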

By way of analogy, consider sex differences in body shape. The Hyde “gender similarities” approach would assess specific traits—e.g., shoulder-waist ratios, waist-hip ratios, torso-to-leg-length ratios, etc.—and then average the d values across these traits, to arrive at the likely conclusion that men and women are more similar than different in body shape. In contrast, the Del Giudice, Booth, and Irwing multivariate approach would more likely generate the conclusion that sex differences in human body shape are quite large, with men and women having distinct multivariate distributions that overlap very little.

Which conclusion is correct? Although there are no God-given prescriptions for proper metrics of effect size, my guess is that lay people would agree more with the Mahalanobis D than with the “mean d” result—i.e., if asked to classify actual human body outlines as “male” or “female,” lay people would likely achieve extremely high levels of accuracy by intuitively aggregating across various body-shape dimensions and making “multivariate,” configural judgments, despite the fact that ds for some individual body traits might be low.

In advocating the use of the Mahalanobis D statistic, Del Giudice, Booth, and Irwing seem, to me, to be advocating the notion that sex differences in various domains are often multivariate and configural in nature. Such a multivariate approach is especially important in research that explores how well sex differences in personality, cognitive abilities, and interests predict sex differences in real-life criteria, such as participation in STEM (science, technology, engineering, and math) fields, susceptibility to mental and physical illnesses, and the tendency to engage in antisocial behaviors.

For example, to adequately explain men’s and women’s different participation in STEM fields, researchers need to consider sex differences in a variety of cognitive ability domains: various visuospatial skills, math abilities, mechanical aptitudes, and so on. A still more complete account would focus on sex differences in interests and personality as well. Men’s interests are, on average, considerably more thing-oriented and less people-oriented than women’s interests are, and women somewhat exceed men on personality traits (e.g., agreeableness, warmth) that may not always find satisfying expression in STEM fields [2, 3].

This discussion of predicting real-life criteria leads to the two additional methodological recommendations made by Del Giudice, Booth, and Irwing: When assessing sex differences in psychological traits, researchers should ensure that (1) trait measures are reliable, and (2) traits are measured at the proper level of specificity. Regarding point (1): Although many gender researchers may not have the statistical expertise or inclination to compute latent factor measures, they nonetheless need to recognize that unreliable trait measures can attenuate sex differences and they must statistically correct for the unreliability of measures, when possible [4].

One nice feature of Del Giudice, Booth, and Irwing’s recommendations is that they can be put to an empirical test. This can be illustrated by research on how well sex differences in personality account for sex differences in antisocial behavior [5]. Del Giudice, Booth, and Irwing suggest that, because of their finer resolution, Big Five facet scores will predict sex differences in antisocial behavior better than Big Five factor scores. This is a testable proposition. They also suggest that when researchers predict sex differences in antisocial behavior from personality measures, they need to employ a multivariate approach to personality. Research shows that sex differences in a number of personality traits—e.g., components of agreeableness, conscientiousness, and neuroticism—contribute to sex differences in antisocial behavior [5]. Thus, the large sex differences in antisocial behavior that are apparent in everyday life probably reflect large multivariate sex differences in personality (in keeping with Del Giudice, Booth, and Irwing’s approach). Clearly, the power of the multivariate approach to predict sex differences in criteria such as antisocial behavior is open to empirical investigation.

It is ironic that while the “gender similarities hypothesis” has gained currency among some psychologists, many biological and medical researchers appear to be moving in the opposite direction, increasingly emphasizing the importance of sex differences in various physiological and disease processes [6]. Would biological and medical researchers entertain the Hydean proposition that “males and females are similar on most, but not all, biological variables”?  On some level, this assertion seems to be true but, as Del Giudice, Booth, and Irwing note, its truth value depends critically on the specific domain of sex differences under study and on the metric of similarity and difference that researchers use. In practical terms, Hyde’s vague “gender similarities hypothesis” will probably provide cold comfort to men and women seeking sound and specific medical advice concerning their heart disease, autoimmune disorders, or medication levels. In biology and medicine, as in psychology, I believe it will prove useful to take a multivariate approach to sex-linked traits in various domains, to acknowledge that some sex differences are small while others are large, and to keep one’s eye on the criteria that need to be predicted rather than on broad ideological statements.

Del Giudice, Booth, and Irwing’s title employs the much-used “Mars and Venus” metaphor, suggesting a seemingly astronomical separation between the sexes. This is undoubtedly an exaggeration, reflecting a kind of poetic license. Hyde prefers to speak of the distance between North Dakota and South Dakota. However, her metaphor may, inadvertently, reflect a truth she is unwilling to acknowledge: that if you travel from the multivariate “centroid” of one state to the other, you’ll still have a mighty long way to walk.

References

1. Hyde JS (2005). The gender similarities hypothesis. Amer Psychologist 60: 581-592.
2. Lippa RA (2005). Gender, nature, and nurture. Mahwah, NJ: Lawrence Erlbaum Associates.
3. Su R, Rounds J, Armstrong PI (2009). Men and things, women and people: A meta-analysis of sex differences in interests. Psych Bull 135: 859-884.
4. Lippa RA (2006). The gender reality hypothesis. Amer Psychologist 61: 639-640.
5. Moffitt TE, Caspi A, Rutter M, Silva PA (2001). Sex differences in antisocial behavior. Cambridge, England: Cambridge University Press.
6. Blair ML (2007). Sex-based differences in physiology: What should we teach in the medical curriculum? Adv Physiol Educ 31: 23-25.