The “Pre” in [my] “Preprint” is for Pre-figurative · Commonplace

“When you are a PhD student who has just published your research, and you decided to publish it as a preprint, one of the most common questions you then get asked is “Where is it now?” At least this has been my experience. Well-meaning PIs or postdocs, colleagues, strangers, who all really enjoyed reading your research and findings but also really want to know “which journal have you submitted it to now?” It’s important for them to know, or maybe it’s just a conversation starter while queuing for the cafeteria. Who knows. Your answer of “Journal X” will be followed by an understanding nod, an approving “yep, that one is a nice one,” the sharing of a funny story about their own experience with that journal. Certainly, and generally unspoken, some sort of gauging of the value of the research. Did you send it to a journal that matches the value they thought your research had? It’s a fun game to play.

Except I have nothing to answer.

They do not warn us about this when submitting to preprint servers, this ethology of post-preprint-submission in the life sciences. Maybe it should be under the disclaimer about preprints not being certified by peer-review. “Please make sure you are sending this to a journal too, or be prepared for very awkward small talk from here on.” “Make sure you also publish it for real.” They do not warn us but there are many, many signs. It’s in the way submission guidelines are written. It’s in the way preprints are talked about. It’s in the way preprints are talked about in the preprint server itself. It’s in the many spinoffs, add-ons, “overlay services” and “integrated pipelines” that make it more and more seamless and easy to transfer your preprint to a journal. It is in the expectations of those around you. It’s in the awkwardness of you admitting that no, you did not submit it anywhere (else?)….

I published my research on a preprint server. Making it accessible to everyone and without having to pay my way out of paywalls. More importantly, I published my research on a preprint server without plans to send it to a journal. And this is what I encourage fellow PhD students to do too. I published it with no format constraints, no figure limits, no restriction on the length of my materials and methods, no cut-offs to the length of my bibliography, no “STAR-methods,” no word counts. I published it — because why not? — with an anomalously long introduction that reads more like a review and that could well be a publication of its own. I published it in my voice and in my style. I published it with an open license and without transferring copyright. I published it in its most liberated form.

Well-meaning voices will remind you that without peer-review your preprint still needs to go through the “print” process. Services now exist for that too (I should here thank Review Commons), and allow you to submit your preprint for review by experts in the field without having to submit it to a journal. So I did exactly that and got my preprint peer-reviewed. Peer-reviewed for its content and not for its fit for a journal. Peer-reviewed journal-independently. I posted the peer review comments publicly, alongside my research and with them, my answers. Answers that I wrote not only to the reviewers themselves, but to the future readers of those reviews. I wrote them conversationally, like a three-way dialogue, in a way that was open to possibilities and to discussion and not in a way constrained by imposed timeframes for revision and resubmission. I recommend this to my fellow PhD students too. Peer-review in a liberated and liberating form, where critical evaluations of your research are just that, critical evaluations by a peer. A contextualising opinion. Not the yes or no on whether your research should even deserve to be seen by the world….”

Journal citation reports and the definition of a predatory journal: The case of the Multidisciplinary Digital Publishing Institute (MDPI) | Research Evaluation | Oxford Academic

The extent to which predatory journals can harm scientific practice increases as the number of such journals expands, in so far as they undermine scientific integrity, quality, and credibility, especially if those journals leak into prestigious databases. Journal Citation Reports (JCR), a reference for the assessment of researchers and for grant-making decisions, is used as a standard whitelist, in so far as the selectivity of a JCR-indexed journal adds a legitimacy of sorts to the articles that the journal publishes. The Multidisciplinary Digital Publishing Institute (MDPI), once included on Beall’s list of potential, possible or probable predatory scholarly open-access publishers, had 53 journals ranked in the 2018 JCR annual report. These journals are analysed, not only against the formal criteria for the identification of predatory journals but, taking a step further, also with regard to their self-citations and the source of those self-citations in 2018 and 2019. The results showed that the self-citation rates increased and were much higher than those of the leading journals in the same JCR category. In addition, an increasingly high rate of citations from other MDPI journals was observed. The formal criteria, together with the analysis of the citation patterns of the 53 journals under analysis, all singled them out as predatory journals. Hence, specific recommendations are given to researchers, educational institutions and prestigious databases, advising them to review their working relations with those sorts of journals.
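The self-citation analysis described in this abstract reduces to simple arithmetic: the share of a journal's incoming citations that originate from the journal itself. A minimal sketch, with invented journal names and citation counts (nothing here is taken from the paper's data):

```python
# Sketch: computing a journal self-citation rate, the metric the study
# uses to flag anomalous citation patterns. Names and counts are invented.

def self_citation_rate(citing_counts, journal):
    """Fraction of a journal's incoming citations that come from itself."""
    total = sum(citing_counts.values())
    return citing_counts.get(journal, 0) / total if total else 0.0

# Illustrative incoming-citation sources for a hypothetical "Journal A"
incoming = {"Journal A": 40, "Journal B": 35, "Journal C": 25}
rate = self_citation_rate(incoming, "Journal A")  # 40 / 100 = 0.4
```

The same function applied with a publisher-level grouping (counting citations from any sibling journal of the same publisher as "self") would capture the cross-MDPI citation pattern the study also reports.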

Preprints in times of COVID19: the time is ripe for agreeing on terminology and good practices | BMC Medical Ethics | Full Text

Abstract:  Over recent years, the research community has been increasingly using preprint servers to share manuscripts that are not yet peer-reviewed. While this practice enables quick dissemination of research findings, it raises several challenges in publication ethics and integrity. In particular, preprints have become an important source of information for stakeholders interested in COVID19 research developments, including traditional media, social media, and policy makers. Despite caveats about their nature, many users can still confuse preprints with peer-reviewed manuscripts. If unconfirmed but already widely shared first-draft results later prove wrong or misinterpreted, it can be very difficult to “unlearn” what we thought was true. Complexity further increases if unconfirmed findings have been used to inform guidelines. To help achieve a balance between early access to research findings and its negative consequences, we formulated five recommendations: (a) consensus should be sought on a term clearer than “preprint”, such as “Unrefereed manuscript”, “Manuscript awaiting peer review” or “Non-reviewed manuscript”; (b) caveats about unrefereed manuscripts should be prominent on their first page, and each page should include a red watermark stating “Caution—Not Peer Reviewed”; (c) preprint authors should certify that their manuscript will be submitted to a peer-review journal, and should regularly update the manuscript status; (d) high-level consultations should be convened to formulate clear principles and policies for the publication and dissemination of non-peer-reviewed research results; (e) in the longer term, an international initiative to certify servers that comply with good practices could be envisaged.


COAR releases resource types vocabulary version 3.0 for repositories with new look and feel – COAR

“We are pleased to announce the release of version 3.0 of the resource types vocabulary. Since 2015, three COAR Controlled Vocabularies have been developed and are maintained by the Controlled Vocabulary Editorial Board: Resource types, access rights and version types.  These vocabularies have a new look and are now being managed using the iQvoc platform, hosted by the University of Vienna Library.

Using controlled vocabularies enables repositories to be consistent in describing their resources, helps with search and discovery of content, and allows machine readability for interoperability. The COAR vocabularies are available in several languages, supporting multilingualism across repositories. They also play a key role in making semantic artifacts and repositories compliant with the FAIR Principles, in particular when it comes to findability and interoperability….”
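The consistency and interoperability point above can be made concrete: a repository that normalizes free-text resource types against a shared vocabulary describes its holdings uniformly and machine-readably. A minimal Python sketch of that normalization step, assuming a toy vocabulary (the URIs below are placeholders, not actual COAR identifiers):

```python
# Sketch: normalizing free-text resource-type strings to a controlled
# vocabulary, as a repository ingest step might. Vocabulary URIs are
# placeholders for illustration only.

VOCAB = {
    "journal article": "https://example.org/resource_type/journal_article",
    "preprint": "https://example.org/resource_type/preprint",
}

# Common free-text variants mapped to their canonical vocabulary label
ALIASES = {
    "article": "journal article",
    "paper": "journal article",
    "pre-print": "preprint",
}

def normalize(resource_type):
    """Return the vocabulary URI for a free-text label, or None if unknown."""
    key = resource_type.strip().lower()
    key = ALIASES.get(key, key)
    return VOCAB.get(key)
```

With this in place, "Paper", "article" and "journal article" all resolve to one URI, so search, aggregation and cross-repository harvesting see a single machine-readable type rather than many spellings.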

Un thésaurus trilingue de la science ouverte dans Loterre (A trilingual open science thesaurus in Loterre)

From Google’s English:

“This is the objective that Inist wishes to achieve with its “Open science thesaurus,” which has just been posted on its Loterre terminology platform:

The terminological engineering department of Inist initiated this work by relying on existing glossaries in this field and on the open science taxonomy resulting from the FOSTER project. The terminological resource was then enriched thanks to a search of reference documents in the field.”

Glossary Organizing document – instructions for contributors (original doc) – Google Docs

“We invite all interested to: write definitions, comment on existing definitions, add alternative definitions where applicable, and suggest relevant references. If you feel that key terms are missing, please add them – you can let us know, or contact us with suggestions in the FORRT slack or email (please CC during the period Feb 12 to March 1st). The full list of terms will form part of a larger glossary; once all terms have been added, the lead writing team (Parsons, Azevedo, & Elsherif) will develop an abridged version to submit as a manuscript. We outline the kinds of contributions and their correspondence to authorship in more detail in the next section. Don’t forget to add your name and details to the contributions spreadsheet….”

Researcher attitudes toward data sharing in public data repositories: a meta-evaluation of studies on researcher data sharing | Emerald Insight

Abstract:  Purpose

The purpose of this paper is to report a study of how research literature addresses researchers’ attitudes toward data repository use. In particular, the authors are interested in how the term data sharing is defined, how data repository use is reported and whether there is need for greater clarity and specificity of terminology.

Design/methodology/approach

To study how the literature addresses researcher data repository use, relevant studies were identified by searching Library, Information Science and Technology Abstracts, Library and Information Science Source, Thomson Reuters’ Web of Science Core Collection and Scopus. A total of 62 studies were identified for inclusion in this meta-evaluation.

Findings

The study shows a need for greater clarity and consistency in the use of the term data sharing in future studies to better understand the phenomenon and allow for cross-study comparisons. Furthermore, most studies did not address data repository use specifically. In most analyzed studies, it was not possible to segregate results relating to sharing via public data repositories from other types of sharing. When sharing in public repositories was mentioned, the prevalence of repository use varied significantly.

Originality/value

Researchers’ data sharing is of great interest to library and information science research and practice to inform academic libraries that are implementing data services to support these researchers. This study explores how the literature approaches this issue, especially the use of data repositories, the use of which is strongly encouraged. This paper identifies the potential for additional study focused on this area.

Manipulation of bibliometric data by editors of scientific journals

“Such misuse of terms not only justifies the erroneous practice of research bureaucracy of evaluating research performance on those terms but also encourages editors of scientific journals and reviewers of research papers to ‘game’ the bibliometric indicators. For instance, if a journal seems to lack an adequate number of citations, the editor of that journal might decide to make it obligatory for its authors to cite papers from the journal in question. I know an Indian journal of fairly reasonable quality in terms of several other criteria but can no longer consider it so because it forces authors to include unnecessary (that is, plain false) citations to papers in that journal. Any further assessment of this journal that includes self-citations will lead to a distorted measure of its real status….

An average paper in the natural or applied sciences lists at least 10 references.1 Some enterprising editors have taken this number to be the minimum for papers submitted to their journals. Such a norm is enforced in many journals from Belarus, and we, authors, are now so used to that norm that we do not even realize the distortions it creates in bibliometric data. Indeed, I often notice that some authors – merely to meet the norm of at least 10 references – cite very old textbooks and Internet resources with URLs that are no longer valid. The average for a good paper may be more than 10 references, and a paper with fewer than 10 references may yet be a good paper (The first paper by Einstein did not have even one reference in its original version!). I believe that it is up to a peer reviewer to judge whether the author has given enough references and whether they are suitable, and it is not for a journal’s editor to set any mandatory quota for the number of references….

Some international journals intervene arbitrarily to revise the citations in articles they receive: I submitted a paper with my colleagues to an American journal in 2017, and one of the reviewers demanded that we replace references in Russian with references in English. Two of us responded with a correspondence note titled ‘Don’t dismiss non-English citations’ that we then submitted to Nature: in publishing that note, the editors of Nature removed some references – from the paper2 that condemned the practice of replacing an author’s references with those more to the editor’s liking – and replaced them with a, perhaps more relevant, reference to a paper that we had not even read at that point! … 

Editors of many international journals are now looking not for quality papers but for papers that will not lower the impact factor of their journals….”

The Rights Retention Strategy and publisher equivocation: an open letter to researchers | Plan S

“cOAlition S strategy of applying a prior licence to the Author’s Accepted Manuscript (AAM) is designed to facilitate full and immediate open access of funded scientific research for the greater benefit of science and society. It helps authors exercise their ownership rights on the AAM, so they can share it immediately in a repository under an open licence.

The manuscript – even after peer-review – is the intellectual creation of the authors. The RRS is designed to protect authors’ rights. The costs that publishers incur for the AAM, such as managing the peer-review process, are covered by subscriptions or publication fees. Delivering such publication services therefore does not entitle publishers to limit, constrain or appropriate ownership rights in the author’s AAM.

Some subscription publishers have recently put in place practices that attempt to prevent cOAlition S funded researchers from exercising their right to make their AAM open access immediately on publication.

The undersigned – cOAlition S funders and other stakeholders in academic publishing – wish to provide clarity to researchers about these practices, and caution them about the possible consequences….”

How faculty define quality, prestige, and impact in research | bioRxiv

Abstract:  Despite the calls for change, there is significant consensus that when it comes to evaluating publications, review, promotion, and tenure processes should aim to reward research that is of high “quality,” has an “impact,” and is published in “prestigious” journals. Nevertheless, such terms are highly subjective, making it challenging to ascertain precisely what such research looks like. Accordingly, this article responds to the question: how do faculty from universities in the United States and Canada define the terms quality, prestige, and impact? We address this question by surveying 338 faculty members from 55 different institutions. This study’s findings highlight that, despite their highly varied definitions, faculty often describe these terms in overlapping ways. Additionally, results show that the marked variance in definitions across faculty does not correspond to demographic characteristics. This study’s results highlight the need to more clearly implement evaluation regimes that do not rely on ill-defined concepts.


Open access

“Open access (OA) is a set of principles and a range of practices through which research outputs are distributed online, free of cost or other access barriers.[1] With open access strictly defined (according to the 2001 definition), or libre open access, barriers to copying or reuse are also reduced or removed by applying an open license for copyright….”