Science is self-correcting: JPSP-PPID is not

With over 7,000 citations by the end of 2021, the Ryff and Keyes (1995) article is one of the most highly cited articles in the Journal of Personality and Social Psychology. A trend analysis shows that citations are still increasing, with over 800 citations in the past two years.

Most of these citations refer to the use of Ryff’s measure of psychological well-being and uncritically accept Ryff’s assertion that her PWB measure is a valid measure of psychological well-being. The abstract implies that the authors provided empirical support for Ryff’s theory of psychological well-being.

Contemporary psychologists contrast Ryff’s psychological well-being (PWB) with Diener’s (1984) subjective well-being (SWB). In an article with over 1,000 citations, Ryff and her colleagues (Ryff et al., 2002) examined how PWB and SWB are empirically related. The result was a two-factor model that postulates that SWB and PWB are related, but distinct, forms of well-being.

The general acceptance of this model shows that most psychologists lack proper training in the interpretation of structural equation models (Borsboom, 2006), even though graphic representations of these models make SEM accessible to readers who are not familiar with matrix algebra. To interpret an SEM model, it is only necessary to know that boxes represent measured variables, ovals represent unmeasured constructs, directed straight arrows represent the assumption that one construct has a causal influence on another, and curved bidirectional arrows imply an unmeasured common cause.

Starting from the top, we see that the model implies that an unmeasured common cause produces a strong correlation between two unmeasured variables that are labelled Psychological Well-Being and Subjective Well-Being. These labels imply that the constructs PWB and SWB are represented by unmeasured variables. The direct causal arrows from these unmeasured variables to the measured variables imply that PWB and SWB can be measured because the measured variables reflect the unmeasured variables to some extent. This is called a reflective measurement model (Borsboom et al., 2003). For example, autonomy is a measure of PWB because .38^2 = 14% of the variance in autonomy scores reflects PWB. Of course, this makes autonomy a poor indicator of PWB because the remaining 86% of the variance does not reflect the influence of PWB. This variance in autonomy is caused by other unmeasured influences and is called unique variance, residual variance, or disturbance. It is often omitted from SEM figures because it is assumed to be simply irrelevant measurement error. I added it here because Ryff and users of her measure clearly do not think that 86% of the variance in the autonomy scale is just measurement error. In fact, autonomy scale scores are often used as if they were a 100% valid measure of autonomy. The proper interpretation of the model is therefore that autonomy is measured with high validity, but that variation in autonomy is only a poor indicator of psychological well-being.
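The arithmetic behind these percentages is simple: in a reflective model, the squared standardized loading is the proportion of indicator variance that reflects the latent factor. Here is a minimal sketch using only the two loadings cited in this post (the remaining loadings are in the figure and follow the same logic):

```python
# Minimal sketch: in a reflective measurement model each standardized indicator is
#   x_i = loading_i * factor + residual_i,
# so loading_i ** 2 is the share of indicator variance that reflects the latent factor.
# The loadings below are the two values cited in the text.

loadings = {
    "autonomy": 0.38,
    "personal relationships": 0.66,
}

for indicator, loading in loadings.items():
    shared = loading ** 2        # variance that reflects PWB
    unique = 1 - shared          # unique variance / residual / "disturbance"
    print(f"{indicator}: {shared:.0%} reflects PWB, {unique:.0%} is unique variance")
```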

Examination of the factor loadings (i.e., the numbers next to the arrows from PWB to the six indicators) shows that personal relationships has the highest validity as a measure of PWB, but even for personal relationships, the proportion of variance that reflects PWB is only .66^2 = 44%.

In a manuscript (doc) that was desk-rejected by JPSP, we challenged this widely accepted model of PWB. We argued that the reflective model does not fit Ryff’s own theory of PWB. In a nutshell, Ryff’s theory of PWB is one of many list-theories of well-being (Sumner, 1996). The theory lists a number of attributes that are assumed to be necessary and sufficient for high well-being.

This theory of well-being implies a different measurement model, in which the arrows point from the measured variables to the construct of PWB. In psychometrics, these models are called formative measurement models. There is nothing unobserved about formative constructs; they are merely a combination of the measured variables. The simplest way to integrate information about the components of PWB is to average them. If assumptions about importance are added, the construct becomes a weighted average. This model is shown in Figure 2.
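To make concrete what a formative score is, here is a minimal sketch, assuming standardized component scores; the component values and weights are purely illustrative placeholders, not estimates from any fitted model:

```python
# Minimal sketch of a formative PWB composite: the construct is just a (weighted)
# average of the measured components, not an unobserved cause of them.
# All scores and weights below are illustrative placeholders.

def formative_pwb(scores, weights=None):
    """Combine measured PWB components into a formative score (weighted average)."""
    if weights is None:
        weights = {name: 1.0 for name in scores}   # simple average by default
    total_weight = sum(weights[name] for name in scores)
    return sum(scores[name] * weights[name] for name in scores) / total_weight

person = {
    "autonomy": 0.5,
    "environmental mastery": 1.2,
    "personal growth": -0.3,
    "positive relations": 0.8,
    "purpose in life": 0.1,
    "self-acceptance": 0.6,
}

print(formative_pwb(person))                                   # unweighted composite
print(formative_pwb(person, {**{k: 1.0 for k in person},
                             "environmental mastery": 2.0}))   # weight one component more
```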

The key problem for this model is that it makes no predictions about the pattern of correlations among the measured variables. For example, Ryff’s theory does not postulate whether an increase in autonomy produces an increase in personal growth or a decrease in personal relations. At best, the distinction between PWB and SWB might imply that changes in PWB components are independent of changes in SWB components, but this assumption is highly questionable. For example, some studies suggest that positive relationships improve subjective well-being (Schimmack & Lucas, 2010).

To conclude, JPSP has published two highly cited articles that fitted a reflective measurement model to PWB indicators. In the desk-rejected manuscript, Jason Payne and I presented a new model that is grounded in theories of well-being and that treats PWB dimensions like autonomy and positive relations as possible components of a good life. Our model also clarified the confusion about Diener’s (1984) model of subjective well-being.

Ryff et al.’s (2002) two-factor model of well-being was influenced by Ryan and Deci’s (2001) distinction between two broad traditions in well-being research: “one dealing with happiness (hedonic well-being), and one dealing with human potential (eudaimonic well-being; Ryan & Deci, 2001; see also Waterman, 1993)” (Ryff et al., 2002, p. 1007). We argued that this dichotomy overlooks another important distinction between well-being theories, namely the distinction between subjective and objective theories of well-being (Sumner, 1996). The key difference is that objective theories aim to specify universal aspects of a good life, based on philosophical analyses of the good life. In contrast, subjective theories reject the notion that universal criteria of a good life exist and leave it to individuals to create their own evaluation standards of a good life (Cantril, 1965). Unfortunately, Diener’s tripartite model of SWB is difficult to classify because it combines objective and subjective indicators. Whereas life-evaluations like life-satisfaction judgments are clearly subjective indicators, using the amount of positive affect and negative affect as indicators implies a hedonistic conception of well-being. Diener never resolved this contradiction (Busseri & Sadava, 2011), but his writing made it clear that he considered subjectivity an essential component of well-being.

It is therefore incorrect to characterize Diener’s concept of SWB as a hedonic or hedonistic conception of well-being. The key contribution of Diener was to introduce psychologists to subjective conceptions of well-being and to publish the most widely used subjective measure of well-being, namely the Satisfaction with Life Scale. In my opinion, the inclusion of PA and NA in the tripartite model was a mistake because it does not allow individuals to choose what they want to do with their lives. Even Diener himself published articles suggesting that positive affect and negative affect are not essential for all people (Suh, Diener, Oishi, & Triandis, 1998). At the very least, it remains an empirical question how important positive affect and negative affect are for subjective life evaluations and whether other aspects of a good life are even more important. This question can be tested empirically by examining how much eudaimonic and hedonic measures of well-being contribute to variation in subjective measures of well-being. It leads to a model in which life-satisfaction judgments are the criterion variable and the other variables are predictor variables.
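Schematically, that model is an ordinary regression with life-satisfaction as the criterion and the PWB attributes plus affect as predictors. The sketch below uses simulated placeholder data, not the data analyzed in our manuscript, just to show the structure of the analysis:

```python
# Schematic sketch of the prediction model: life-satisfaction judgments as the criterion,
# PWB attributes plus positive and negative affect as predictors. All numbers here are
# simulated placeholders, not estimates from the manuscript.

import numpy as np

rng = np.random.default_rng(0)
n = 500

predictors = ["autonomy", "environmental mastery", "personal growth",
              "positive relations", "purpose in life", "self-acceptance",
              "positive affect", "negative affect"]
X = rng.normal(size=(n, len(predictors)))                # simulated standardized predictors
b_illustrative = np.array([-0.1, 0.6, 0.0, 0.1, 0.0, 0.1, 0.1, -0.1])   # made-up weights
life_satisfaction = X @ b_illustrative + rng.normal(scale=0.6, size=n)

b_hat, *_ = np.linalg.lstsq(X, life_satisfaction, rcond=None)   # ordinary least squares
for name, b in zip(predictors, b_hat):
    print(f"{name:>22}: b = {b:5.2f}")
```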

The most surprising finding was that environmental mastery was a strong unique predictor and a much stronger predictor than positive affect or negative affect (direct effect, b = .66).

In our model, we also allowed for the possibility that PWB attributes influence subjective well-being by increasing positive affect or decreasing negative affect. The total effect is a very strong relationship, b = .78, with more than 50% of the variance in life-satisfaction being explained by a single PWB dimension, namely environmental mastery.
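The decomposition behind these numbers is straightforward: the indirect path through positive and negative affect is the difference between the total and direct effects, and squaring the standardized total effect gives the share of life-satisfaction variance it accounts for. A back-of-the-envelope sketch, treating the total effect as if environmental mastery were the only predictor:

```python
# Back-of-the-envelope decomposition of the environmental-mastery effect on
# life satisfaction, using the standardized coefficients cited in the text.

direct = 0.66                      # direct effect of environmental mastery
total = 0.78                       # total effect (direct + paths via PA and NA)
indirect = total - direct          # mediated through positive and negative affect

print(f"indirect effect via PA/NA: {indirect:.2f}")
print(f"variance explained by the total effect: {total ** 2:.0%}")   # ~61%, i.e. more than 50%
```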

Other noteworthy findings were that none of the other PWB attributes made a positive (direct or indirect) contribution to life-satisfaction judgments; autonomy was even a negative predictor. The effects of positive affect and negative affect were statistically significant, but small. This suggests that PA and NA are meaningful indicators of subjective well-being because they reflect a good life, but it provides no evidence for hedonic theories of well-being, which suggest that positive affect increases well-being no matter how it is elicited.

These results are dramatically different from the published model in JPSP. In that model, an unmeasured construct, SWB, causes variation in environmental mastery. In our model, environmental mastery is a strong cause of the only subjective indicator of well-being, namely life-satisfaction judgments. Whereas the published model implies that feeling good gives people environmental mastery, our model suggests that having control over one’s life increases well-being. Call us crazy, but we think the latter model makes more sense.

So, why was our manuscript desk-rejected without peer review by experts in well-being research? I post the full decision letter below, but I want to highlight the only comment about our actual work.

A related concern has to do with a noticeable gap between your research question, theoretical framework, and research design. The introduction paints your question in broad strokes only, but my understanding is that you attempt to refine our understanding of the structure of well-being, which could be an important contribution to the literature. However, the introduction does not provide a clear rationale for the alternative model presented. Perhaps even more important, the cross-sectional correlational study of one U.S. sample is not suited to provide strong conclusions about the structure of well-being. At the very least, I would have expected to see model comparison tests to compare the fit of the presented model with those of alternative models. In addition, I would have liked to see a replication in an independent sample as well as more critical tests of the discriminant validity and links between these factors, perhaps in longitudinal data, through the prediction of critical outcomes, or by using behavioral genetic data to establish the genetic and environmental architecture of these factors. Put another way, independent of the validity of the Ryff / Keyes model, the presented theory and data did not convince me that your model is a better presentation of the structure of well-being.

Bleidorn’s comments show that even prominent personality researchers lack a basic understanding of psychometrics and construct validation. For example, it is not clear how longitudinal data can answer questions about construct validity. Examining change is of course useful, but without a valid measure of a construct it is not clear what change in scale scores means. Construct validation precedes studies of stability and change. Similarly, nature-and-nurture questions can only be examined meaningfully with a clearly defined phenotype. Bleidorn also completely ignores our distinction between hedonic and subjective well-being and the fact that we are the first to examine the relationship between PWB attributes and life-satisfaction.

As psychometricians have pointed out, personality psychologists often ignore measurement questions and are content to treat averaged self-report ratings as operationalized constructs that require no further validation. We think this blind empiricism is preventing personality psychology from making real progress. It is depressing that even the new generation of personality psychologists shows no interest in improving the construct validity of foundational measures. Fortunately, JPSP-PPID publishes only about 50 articles a year, and there are other outlets for our work. Unfortunately, JPSP has a reputation for publishing only the best work, but this prestige is not warranted by the actual quality of the published articles. For example, the obsession with longitudinal data is not warranted given evidence that about 80% of the variance in personality measures is stable trait variance that does not change. Repeatedly measuring this trait variance does not add to our understanding of stable traits.

Conclusion

To conclude, JPSP has published two cross-sectional articles on the structure of well-being that continue to be highly cited. We find major problems with the models in these articles, but JPSP is not interested in publishing a critique of them. To reiterate, the main problem is that Diener’s SWB model is treated as if it were an objective, hedonic theory of well-being, when the core feature of the model is that well-being is subjective and not objective. We thought that at least the lead editor, Rich Lucas, a former Diener student, would understand this point, but expectations are the mother of disappointment. Of course, we could be wrong about some minor or major issues, but the lack of interest in these foundational questions shows just how far psychology is from being a real science. A real science develops valid measures before it uses them to examine substantive questions. Psychologists invent measures and study their measures without evidence that they reflect important constructs like well-being. Not surprisingly, psychology has produced no consensual theory of well-being that could help people live better lives. This does not stop psychologists from making proclamations about ways to lead a happy or good life. The problem is that these recommendations are all contingent on researchers’ preferred definition of well-being and the measures associated with that tradition/camp/belief system. In this way, psychology is more like (other) religions and less like a science.

Decision Letter

I am writing about your manuscript “Two Concepts of Wellbeing: The Relation Between Psychological and Subjective Wellbeing”, submitted for publication in the Journal of Personality and Social Psychology (JPSP). I have read the manuscript carefully myself, as has the lead Editor at JPSP, Rich Lucas. We read the manuscript independently and then consulted with each other about whether the manuscript meets the threshold for full review. Based on our joint consultation, I have made the decision to reject your paper without sending it for external review. The Editor and I shared a number of concerns about the manuscript that make it unlikely to be accepted for publication and that reduce its potential contribution to the literature. I will elaborate on these concerns below. Due to the high volume of submissions and limited pages available to JPSP, we must limit our acceptances to manuscripts for which there is a general consensus that the contribution is of an important and highly significant level. 
 

  1. Most importantly, papers that rely solely on cross-sectional designs and self-report questionnaire techniques are less and less likely to be accepted here as the number of submissions increases. In fact, such papers are almost always rejected without review at this journal. Although such studies provide an important first step in the understanding of a construct or phenomenon, they have some important limitations. Therefore, we have somewhat higher expectations regarding the size and the novelty of the contribution that such studies can make. To pass threshold at JPSP, I think you would need to expand this work in some way, either by using longitudinal data or by going further in your investigation of the processes underlying these associations. I want to be clear; I agree that studies like this have value (and I also conduct studies using these methods myself), it is just that many submissions now go beyond these approaches in some way, and because competition for space here is so high, those submissions are prioritized.
  2. A related concern has to do with a noticeable gap between your research question, theoretical framework, and research design. The introduction paints your question in broad strokes only, but my understanding is that you attempt to refine our understanding of the structure of well-being, which could be an important contribution to the literature. However, the introduction does not provide a clear rationale for the alternative model presented. Perhaps even more important, the cross-sectional correlational study of one U.S. sample is not suited to provide strong conclusions about the structure of well-being. At the very least, I would have expected to see model comparison tests to compare the fit of the presented model with those of alternative models. In addition, I would have liked to see a replication in an independent sample as well as more critical tests of the discriminant validity and links between these factors, perhaps in longitudinal data, through the prediction of critical outcomes, or by using behavioral genetic data to establish the genetic and environmental architecture of these factors. Put another way, independent of the validity of the Ryff / Keyes model, the presented theory and data did not convince me that your model is a better presentation of the structure of well-being.
  3. The use of a selected set of items rather than the full questionnaires raises concerns about over-fitting and complicates comparisons with other studies in this area. I recommend using complete questionnaires and – should you decide to collect more data – additional measures of well-being to capture the universe of well-being content as best as you can.
  4. I noticed that you tend to use causal language in the description of correlations, e.g. between personality traits and well-being measures. As you certainly know, the data presented here do not permit conclusions about the temporal or causal influence of e.g., neuroticism on negative affect or vice versa and I recommend changing this language to better reflect the correlational nature of your data.     

In closing, I am sorry that I cannot be more positive about the current submission. I hope my comments prove helpful to you in your future research efforts. I wish you the very best of luck in your continuing scholarly endeavors and hope that you will continue to consider JPSP as an outlet for your work.

Sincerely,
Wiebke Bleidorn, PhD
Associate Editor
Journal of Personality and Social Psychology: Personality Processes and Individual Differences

ResearchHub | Open Science Community

“ResearchHub’s mission is to accelerate the pace of scientific research. Our goal is to make a modern mobile and web application where people can collaborate on scientific research in a more efficient way, similar to what GitHub has done for software engineering.

Researchers are able to upload articles (preprint or postprint) in PDF form, summarize the findings of the work in an attached wiki, and discuss the findings in a completely open and accessible forum dedicated solely to the relevant article.

Within ResearchHub, papers are grouped in “Hubs” by area of research. Individual Hubs will essentially act as live journals within focused areas, with highly upvoted posts (i.e., the paper and its associated summary and discussion) moving to the top of each Hub.

To help bring this nascent community together and incentivize contribution to the platform, a new ERC20 token, ResearchCoin (RSC), has been created. Users receive RSC for uploading new content to the platform, as well as for summarizing and discussing research. Rewards for contributions are proportionate to how valuable the community perceives the actions to be – as measured by upvotes.”

 

China releases over 7.43 mln pieces of biological resource data

“The Chinese Academy of Sciences (CAS) has officially published a catalog of biological resources, with more than 7.43 million pieces of biological resource data released.

The catalog collects biological resource data from 72 resource libraries in 40 research institutes of the CAS, which includes biological specimens, plant resources, genetic resources, animal experiment resources, and biodiversity monitoring network resources.

All the resource data are available to the public on network portals, the CAS said….”

Unpatented Shot Dubbed ‘The World’s Covid-19 Vaccine’ Wins Emergency Approval in India

“An unpatented Covid-19 vaccine developed by the Texas Children’s Hospital, Baylor College of Medicine, and the pharmaceutical firm Biological E. Limited received emergency-use authorization from Indian regulators on Tuesday—news that the jab’s creators hailed as a potential turning point in the push to broaden global vaccine access.”

Apropos Data Sharing: Abandon the Distrust and Embrace the Opportunity | DNA and Cell Biology

Abstract:  In this commentary, we focus on the ethical challenges of data sharing and its potential in supporting biomedical research. Taking human genomics (HG) and European governance for sharing genomic data as a case study, we consider how to balance competing rights and interests—balancing protection of the privacy of data subjects and data security, with scientific progress and the need to promote public health. This is of particular relevancy in light of the current pandemic, which stresses the urgent need for international collaborations to promote health for all. We draw from existing ethical codes for data sharing in HG to offer recommendations as to how to protect rights while fostering scientific research and open science.

 

 

 

Preprint articles as a tool for teaching data analysis and scientific communication

Abstract:  The skill of analyzing and interpreting research data is central to the scientific process, yet it is one of the hardest skills for students to master. While instructors can coach students through the analysis of data that they have either generated themselves or obtained from published articles, the burgeoning availability of preprint articles provides a new potential pedagogical tool. We developed a new method in which students use a cognitive apprenticeship model to uncover how experts analyzed a paper and compare the professional’s cognitive approach to their own. Specifically, students first critique research data themselves and then identify changes between the preprint and final versions of the paper that were likely the results of peer review. From this activity, students reported diverse insights into the processes of data presentation, peer review, and scientific publishing. Analysis of preprint articles is therefore a valuable new tool to strengthen students’ information literacy and understanding of the process of science.

 

An integrated paradigm shift to deal with ‘predatory publishing’

The issue of ‘predatory publishing’, and indeed unscholarly publishing practices, affects all academics and librarians around the globe. However, there are some flaws in arguments and analyses made in several papers published on this topic, in particular those that have relied heavily on the blacklists that were established by Jeffrey Beall. While Beall advanced the discussion on ‘predatory publishing’, relying entirely on his blacklists to assess a journal for publishing a paper is problematic. This is because several of the criteria underlying those blacklists were insufficiently specific, excessively broad, arbitrary with no scientific validation, or incorrect identifiers of predatory behavior. The validity of those criteria has been deconstructed in more detail in this paper. From a total of 55 criteria in Beall’s last/latest 2015 set of criteria, we suggest maintaining nine, eliminating 24, and correcting the remaining 22. While recognizing that this exercise involves a measure of subjectivity, it needs to advance in order to arrive – in a future exercise – at a more sensitive set of criteria. Fortified criteria alone, or the use of blacklists and whitelists, cannot combat ‘predatory publishing’, and an overhaul of rewards-based academic publishing is needed, supported by a set of reliable, criteria-based guidance systems.

Job posting: Research officer (Referent/in) in the EU-funded research infrastructure OPERAS (m/f/d), 80%, E13, fixed-term until April 2023. Application deadline: Jan 28, 2022.

The Max Weber Stiftung – Deutsche Geisteswissenschaftliche Institute im Ausland (MWS) is seeking, for its head office in Bonn, at the earliest possible date and for a fixed term until April 2023,

a research officer (m/f/d, 80%) for the EU-funded research infrastructure OPERAS.

The Max Weber Stiftung (www.maxweberstiftung.de) maintains eleven research institutes and several branch offices in 15 countries. The foundation is based in Bonn. Worldwide, it employs more than 350 staff members and supports numerous fellowship holders.
Since 2017, the MWS has held a leading position in the EU-funded research infrastructure OPERAS (https://www.operas-eu.org/). In this role, it is involved in building a research-driven infrastructure for the humanities and social sciences in the European Research Area. Within OPERAS, the TRIPLE project (https://project.gotriple.eu/) is developing the multilingual discovery platform GoTriple (https://www.gotriple.eu/) to operational maturity and making it available, so that literature, research data, projects, and researchers can be identified and found across the European Research Area.
For this innovative project, the MWS is looking for a person to take charge of science communication and networking, first among the TRIPLE project partners themselves, then within the OPERAS research infrastructure, and finally at the European and German levels.

Tasks:
• You lead the “Communication & Dissemination” work package in the TRIPLE project and coordinate working groups on various topics,
• You are responsible for the project’s communication strategy and thereby ensure the coordinated provision of all relevant information within the TRIPLE project and the OPERAS research infrastructure,
• You organize science communication in its various formats (e.g., project website, Twitter, mailing lists), through which the results of the TRIPLE project are disseminated in the research community,
• You take care of the further development and provision of communication material for TRIPLE (online and in print),
• You organize conferences, workshops, and webinars,
• You are responsible for the project budget in your area of responsibility (together with a colleague from the administration),
• You document the work in this project and are responsible for reporting.

Your profile – requirements:
• A completed master’s degree in the humanities, social sciences, library science, or information science,
• Excellent communication and organizational skills,
• Proficiency with word-processing, spreadsheet, and presentation software,
• Knowledge of a common web content management system, preferably WordPress,
• Experience in handling social media (blogs, Twitter, Facebook, LinkedIn),
• Confident written and spoken English (the project language) at level C1; knowledge of French is an advantage.

Desirable:
• Knowledge of and experience with the methods and concepts of the digital humanities, especially with regard to digital publishing,
• Relevant IT skills, for example in web technologies, X technologies, and software architectures,
• Knowledge in the area of open science.

You will cooperate closely with project partners from several European countries, which also involves occasional travel. Accordingly, you bring a strong openness to this form of research-management work and actively help shape the project processes. This includes the ability to cooperate with representatives of IT and the digital humanities as well as with scholars from humanities disciplines.

Provided the requirements are met, we offer remuneration up to pay grade 13 TVöD (Bund), including the collectively agreed benefits and the option of a job ticket. Working from home is possible. The position is in principle suitable for part-time work.

The Max Weber Stiftung is a non-discriminatory employer and places great value on the compatibility of work and family life. Severely disabled applicants will be given preference if they are equally suitable, qualified, and professionally competent.

For further information, please contact Dr. Michael Kaiser (tel. 0228-377 86 24). Please send your application by January 28, 2022, to the following email address: operas_triple(at)maxweberstiftung.de.

Interviews are planned for February 8, 2022, in Bonn. Depending on the state of pandemic-related restrictions, the MWS reserves the right to conduct the interviews by video…

Paris Open Science European Conference (OSEC), Feb 04-05, 2022 @ the French Academy of Sciences, Paris, France | French presidency of the European Council

France is organising a major international event in the context of the French Presidency of the European Union:

Paris Open Science European Conference (OSEC)
On Friday 4th and Saturday 5th February 2022
at the French Academy of Sciences, Paris, France

This international conference is being organised with the strong support of the Ministry of Higher Education, Research and Innovation, the French Academy of Sciences, the French National Center for Scientific Research (CNRS), the National Institute of Health and Medical Research (Inserm), the High Council for Evaluation of Research and Higher Education (Hcéres), the National Research Agency (ANR), the University of Lorraine and the University of Nantes.

The main topics addressed during this conference come within the framework of the transformation of the research and innovation ecosystem in Europe. Particular attention will be given to transparency in health research, the necessary transformation of research evaluations, the future of scientific publishing, and the opening of codes and software produced in a scientific context.

The Moonshot: Crowdsourcing To Develop The First Open-Source, Generic COVID-19 Antiviral Pill – Health Policy Watch

“A global grassroots movement of scientists based on crowdsourcing ideas, expertise, and goodwill has already generated – and freely released – more than half of the known structural information on the main protease of SARS-CoV-2. Based on this, they are now on a quest for an open-source drug that can block the virus from replicating….”

Three crowdsourcing opportunities with the British Library | Digital scholarship blog @ BL

Digital Curator Dr Mia Ridge writes: “In case you need a break from whatever combination of weather, people and news is around you, here are some ways you can entertain yourself (or the kids!) while helping make collections of the British Library more findable, or help researchers understand our past. You might even learn something or make new discoveries along the way!”