Journal citation reports and the definition of a predatory journal: The case of the Multidisciplinary Digital Publishing Institute (MDPI) | Research Evaluation | Oxford Academic

The extent to which predatory journals can harm scientific practice increases as the number of such journals expands, in so far as they undermine scientific integrity, quality, and credibility, especially if those journals leak into prestigious databases. Journal Citation Reports (JCR), a reference for the assessment of researchers and for grant-making decisions, is used as a standard whitelist, in so far as the selectivity of a JCR-indexed journal adds a legitimacy of sorts to the articles that the journal publishes. The Multidisciplinary Digital Publishing Institute (MDPI), once included on Beall’s list of potential, possible or probable predatory scholarly open-access publishers, had 53 journals ranked in the 2018 JCR annual report. These journals are analysed not only against the formal criteria for the identification of predatory journals but, going a step further, also with regard to their self-citations and the sources of those self-citations in 2018 and 2019. The results showed that self-citation rates increased and were much higher than those of the leading journals in the corresponding JCR categories. Moreover, an increasingly high rate of citations from other MDPI journals was observed. The formal criteria, together with the analysis of the citation patterns of the 53 journals under analysis, singled them out as predatory journals. Hence, specific recommendations are given to researchers, educational institutions and prestigious databases, advising them to review their working relations with those sorts of journals.
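The citation measures discussed in this abstract can be made concrete. Below is a minimal sketch of the two rates, assuming the standard definitions (the paper's exact formulas are not reproduced here); the numbers are illustrative, not data from the study.

```python
# Minimal sketch of the two citation rates discussed above. Standard
# definitions are assumed; the figures below are made up for illustration.

def self_citation_rate(journal_self_citations: int, total_citations: int) -> float:
    """Share of a journal's incoming citations that come from the journal itself."""
    return journal_self_citations / total_citations

def publisher_citation_rate(same_publisher_citations: int, total_citations: int) -> float:
    """Share of incoming citations that come from any journal of the same publisher."""
    return same_publisher_citations / total_citations

# Illustrative numbers only:
print(f"self-citation rate: {self_citation_rate(1200, 8000):.1%}")                   # 15.0%
print(f"publisher-level citation rate: {publisher_citation_rate(2400, 8000):.1%}")   # 30.0%
```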

Preprints in times of COVID19: the time is ripe for agreeing on terminology and good practices | BMC Medical Ethics | Full Text

Abstract: Over recent years, the research community has been increasingly using preprint servers to share manuscripts that are not yet peer reviewed. Even though this practice enables quick dissemination of research findings, it raises several challenges in publication ethics and integrity. In particular, preprints have become an important source of information for stakeholders interested in COVID19 research developments, including traditional media, social media, and policy makers. Despite caveats about their nature, many users may still confuse preprints with peer-reviewed manuscripts. If unconfirmed but already widely shared first-draft results later prove wrong or misinterpreted, it can be very difficult to “unlearn” what we thought was true. Complexity further increases if unconfirmed findings have been used to inform guidelines. To help achieve a balance between early access to research findings and its negative consequences, we formulated five recommendations: (a) consensus should be sought on a term clearer than ‘preprint’, such as ‘unrefereed manuscript’, ‘manuscript awaiting peer review’ or ‘non-reviewed manuscript’; (b) caveats about unrefereed manuscripts should be prominent on their first page, and each page should include a red watermark stating ‘Caution—Not Peer Reviewed’; (c) preprint authors should certify that their manuscript will be submitted to a peer-reviewed journal, and should regularly update the manuscript status; (d) high-level consultations should be convened to formulate clear principles and policies for the publication and dissemination of non-peer-reviewed research results; (e) in the longer term, an international initiative to certify servers that comply with good practices could be envisaged.


COAR releases resource types vocabulary version 3.0 for repositories with new look and feel – COAR

“We are pleased to announce the release of version 3.0 of the resource types vocabulary. Since 2015, three COAR Controlled Vocabularies have been developed and are maintained by the Controlled Vocabulary Editorial Board: Resource types, access rights and version types. These vocabularies have a new look and are now being managed using the iQvoc platform, hosted by the University of Vienna Library.

Using controlled vocabularies enables repositories to be consistent in describing their resources, helps with search and discovery of content, and allows machine readability for interoperability. The COAR vocabularies are available in several languages, supporting multilingualism across repositories. They also play a key role in making semantic artifacts and repositories compliant with the FAIR Principles, in particular when it comes to findability and interoperability….”
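As a concrete illustration of the machine readability mentioned in the announcement, here is a minimal sketch that loads a SKOS vocabulary such as COAR's resource types with Python's rdflib and lists its English preferred labels. The download URL and the RDF/XML serialization are assumptions, not details from the announcement; COAR publishes the vocabulary in several formats and the actual endpoint may differ.

```python
# Minimal sketch: load a SKOS controlled vocabulary and list its concepts.
# The URL and format below are assumptions; adjust them to the actual
# download link COAR provides for the resource types vocabulary.
from rdflib import Graph
from rdflib.namespace import SKOS

g = Graph()
g.parse("http://purl.org/coar/resource_type/", format="xml")  # assumed endpoint

# Print each concept URI alongside its English preferred label.
for concept, label in g.subject_objects(SKOS.prefLabel):
    if getattr(label, "language", None) == "en":
        print(concept, "->", label)
```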

Un thésaurus trilingue de la science ouverte dans Loterre (A trilingual open science thesaurus in Loterre)

From Google’s English:

“This is the objective that Inist aims to achieve with its ‘Open science thesaurus’, which has just been posted on its Loterre terminology platform: https://www.loterre.fr/skosmos/TSO/fr/

The terminological engineering department of Inist initiated this work by drawing on existing glossaries in the field and on the open science taxonomy produced by the FOSTER project. The terminological resource was then enriched through a search of reference documents in the field.”

Glossary Organizing document – instructions for contributors (original doc) – Google Docs

“We invite all interested to: write definitions, comment on existing definitions, add alternative definitions where applicable, and suggest relevant references. If you feel that key terms are missing, please add them – you can let us know, or contact us with suggestions in the FORRT slack or email sam.parsons@psy.ox.ac.uk (please CC flavio.azevedo@uni-jena.de during the period Feb 12 to March 1st). The full list of terms will form part of a larger glossary to be hosted on https://FORRT.org; once all terms have been added, the lead writing team (Parsons, Azevedo, & Elsherif) will develop an abridged version to submit as a manuscript. We outline the kinds of contributions and their correspondence to authorship in more detail in the next section. Don’t forget to add your name and details to the contributions spreadsheet….”

Researcher attitudes toward data sharing in public data repositories: a meta-evaluation of studies on researcher data sharing | Emerald Insight

Abstract:  Purpose

The purpose of this paper is to report a study of how research literature addresses researchers’ attitudes toward data repository use. In particular, the authors are interested in how the term data sharing is defined, how data repository use is reported and whether there is need for greater clarity and specificity of terminology.

Design/methodology/approach

To study how the literature addresses researcher data repository use, relevant studies were identified by searching Library, Information Science and Technology Abstracts, Library and Information Science Source, Thomson Reuters’ Web of Science Core Collection and Scopus. A total of 62 studies were identified for inclusion in this meta-evaluation.

Findings

The study shows a need for greater clarity and consistency in the use of the term data sharing in future studies to better understand the phenomenon and allow for cross-study comparisons. Furthermore, most studies did not address data repository use specifically. In most analyzed studies, it was not possible to segregate results relating to sharing via public data repositories from other types of sharing. When sharing in public repositories was mentioned, the prevalence of repository use varied significantly.

Originality/value

Researchers’ data sharing is of great interest to library and information science research and practice, informing academic libraries that are implementing data services to support researchers. This study explores how the literature approaches this issue, especially the use of data repositories, which is strongly encouraged. This paper identifies the potential for additional study focused on this area.

Manipulation of bibliometric data by editors of scientific journals

“Such misuse of terms not only justifies the erroneous practice of research bureaucracy of evaluating research performance on those terms but also encourages editors of scientific journals and reviewers of research papers to ‘game’ the bibliometric indicators. For instance, if a journal seems to lack an adequate number of citations, the editor of that journal might decide to make it obligatory for its authors to cite papers from the journal in question. I know an Indian journal of fairly reasonable quality in terms of several other criteria, but I can no longer consider it so because it forces authors to include unnecessary (that is, plainly false) citations to papers in that journal. Any further assessment of this journal that includes self-citations will lead to a distorted measure of its real status….

An average paper in the natural or applied sciences lists at least 10 references [1]. Some enterprising editors have taken this number to be the minimum for papers submitted to their journals. Such a norm is enforced in many journals from Belarus, and we authors are now so used to that norm that we do not even realize the distortions it creates in bibliometric data. Indeed, I often notice that some authors – merely to meet the norm of at least 10 references – cite very old textbooks and Internet resources with URLs that are no longer valid. The average for a good paper may be more than 10 references, and a paper with fewer than 10 references may yet be a good paper (the first paper by Einstein did not have even one reference in its original version!). I believe that it is up to a peer reviewer to judge whether the author has given enough references and whether they are suitable, and it is not for a journal’s editor to set any mandatory quota for the number of references….

Some international journals intervene arbitrarily to revise the citations in articles they receive: I submitted a paper with my colleagues to an American journal in 2017, and one of the reviewers demanded that we replace references in Russian with references in English. Two of us responded with a correspondence note titled ‘Don’t dismiss non-English citations’, which we then submitted to Nature: in publishing that note, the editors of Nature removed some references – from the very paper [2] that condemned the practice of replacing an author’s references with those more to the editor’s liking – and replaced them with a perhaps more relevant reference to a paper that we had never read at that point! …

Editors of many international journals are now looking not for quality papers but for papers that will not lower the impact factor of their journals….”

The Rights Retention Strategy and publisher equivocation: an open letter to researchers | Plan S

“cOAlition S’s strategy of applying a prior licence to the Author’s Accepted Manuscript (AAM) is designed to facilitate full and immediate open access to funded scientific research for the greater benefit of science and society. It helps authors exercise their ownership rights over the AAM, so they can share it immediately in a repository under an open licence.

The manuscript – even after peer review – is the intellectual creation of the authors. The RRS is designed to protect authors’ rights. The costs that publishers incur for the AAM, such as managing the peer-review process, are covered by subscriptions or publication fees. Delivering such publication services therefore does not entitle publishers to limit, constrain or appropriate ownership rights in the author’s AAM.

Some subscription publishers have recently put in place practices that attempt to prevent cOAlition S funded researchers from exercising their right to make their AAM open access immediately on publication.

The undersigned – cOAlition S funders and other stakeholders in academic publishing – wish to provide clarity to researchers about these practices, and caution them about the possible consequences….”

How faculty define quality, prestige, and impact in research | bioRxiv

Abstract: Despite the calls for change, there is significant consensus that, when it comes to evaluating publications, review, promotion, and tenure processes should aim to reward research that is of high “quality,” has an “impact,” and is published in “prestigious” journals. Nevertheless, such terms are highly subjective, which makes it challenging to ascertain precisely what such research looks like. Accordingly, this article responds to the question: how do faculty from universities in the United States and Canada define the terms quality, prestige, and impact? We address this question by surveying 338 faculty members from 55 different institutions. This study’s findings highlight that, despite their highly varied definitions, faculty often describe these terms in overlapping ways. Additionally, the results show that the marked variance in definitions across faculty does not correspond to demographic characteristics. These results highlight the need for evaluation regimes that do not rely on ill-defined concepts.


Open access

“Open access (OA) is a set of principles and a range of practices through which research outputs are distributed online, free of cost or other access barriers.[1] With open access strictly defined (according to the 2001 definition), or libre open access, barriers to copying or reuse are also reduced or removed by applying an open license for copyright….”

Thread by @petersuber on “Gold OA”

“I’d put this historically. “Gold OA” originally meant OA delivered by journals regardless of the journal’s business model. Both fee-based and no-fee OA journals were gold, as opposed to “green OA”, which meant OA delivered by repositories….”

Open Source is Everywhere, but So Is Fake Open Source | Hacker Noon

“Tristan Louis gives weight to a new term that I like a lot: fauxpen. Faux in French means “false” or “fake”. So fauxpen means fake open. There has always been a lot of that going around, but since the world of tech inevitably contains more of everything, there’s more fauxpen stuff than ever….”