“We are pleased to announce the release of version 3.0 of the resource types vocabulary. Since 2015, three COAR Controlled Vocabularies have been developed and are maintained by the Controlled Vocabulary Editorial Board: Resource types, access rights and version types. These vocabularies have a new look and are now being managed using the iQvoc platform, hosted by the University of Vienna Library.
Using controlled vocabularies enables repositories to be consistent in describing their resources, helps with search and discovery of content, and allows machine readability for interoperability. The COAR vocabularies are available in several languages, supporting multilingualism across repositories. They also play a key role in making semantic artifacts and repositories compliant with the FAIR Principles, in particular when it comes to findability and interoperability….”
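A minimal sketch of what “machine readability” means here: the platforms named above (iQvoc, and Skosmos in the following item) serve vocabularies as SKOS, so multilingual concept labels can be read programmatically. The concept URI and labels below are illustrative sample data, not the actual COAR vocabulary file.

```python
# Sketch: extracting multilingual skos:prefLabel values from a SKOS/RDF-XML
# concept using only the standard library. Sample data is hypothetical.
import xml.etree.ElementTree as ET

SKOS_SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:skos="http://www.w3.org/2004/02/skos/core#">
  <skos:Concept rdf:about="http://example.org/resource_types/journal_article">
    <skos:prefLabel xml:lang="en">journal article</skos:prefLabel>
    <skos:prefLabel xml:lang="fr">article de revue</skos:prefLabel>
    <skos:prefLabel xml:lang="de">Zeitschriftenartikel</skos:prefLabel>
  </skos:Concept>
</rdf:RDF>"""

NS = {
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "skos": "http://www.w3.org/2004/02/skos/core#",
}
# ElementTree exposes xml:lang under the full XML namespace URI.
XML_LANG = "{http://www.w3.org/XML/1998/namespace}lang"

def pref_labels(skos_xml: str) -> dict:
    """Return {language: label} for the first skos:Concept found."""
    root = ET.fromstring(skos_xml)
    concept = root.find("skos:Concept", NS)
    return {
        label.get(XML_LANG): label.text
        for label in concept.findall("skos:prefLabel", NS)
    }

print(pref_labels(SKOS_SAMPLE)["fr"])  # article de revue
```

Because every repository that adopts the vocabulary points at the same concept URIs, a harvester can match records across repositories and languages without string matching on free-text labels.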
“This is the objective that Inist wishes to achieve with its “Open science thesaurus”, which has just been posted on its Loterre terminology platform: https://www.loterre.fr/skosmos/TSO/fr/
The terminological engineering department of Inist initiated this work by drawing on existing glossaries in the field and on the open science taxonomy produced by the FOSTER project. The terminological resource was then enriched through a review of reference documents in the field.”
“We invite all who are interested to write definitions, comment on existing definitions, add alternative definitions where applicable, and suggest relevant references. If you feel that key terms are missing, please add them – you can let us know or contact us with suggestions in the FORRT Slack, or email firstname.lastname@example.org (please CC email@example.com during the period Feb 12 to March 1st). The full list of terms will form part of a larger glossary to be hosted on https://FORRT.org. Once all terms have been added, the lead writing team (Parsons, Azevedo, & Elsherif) will develop an abridged version to submit as a manuscript. We outline the kinds of contributions and their correspondence to authorship in more detail in the next section. Don’t forget to add your name and details to the contributions spreadsheet….”
The purpose of this paper is to report a study of how the research literature addresses researchers’ attitudes toward data repository use. In particular, the authors are interested in how the term data sharing is defined, how data repository use is reported and whether there is a need for greater clarity and specificity of terminology.
To study how the literature addresses researcher data repository use, relevant studies were identified by searching Library Information Science and Technology Abstracts, Library and Information Science Source, Thomson Reuters’ Web of Science Core Collection and Scopus. A total of 62 studies were identified for inclusion in this meta-evaluation.
The study shows a need for greater clarity and consistency in the use of the term data sharing in future studies to better understand the phenomenon and allow for cross-study comparisons. Furthermore, most studies did not address data repository use specifically. In most analyzed studies, it was not possible to segregate results relating to sharing via public data repositories from other types of sharing. When sharing in public repositories was mentioned, the prevalence of repository use varied significantly.
Researchers’ data sharing is of great interest to library and information science research and practice, informing academic libraries that are implementing data services to support these researchers. This study explores how the literature approaches this issue, especially the use of data repositories, which is strongly encouraged. This paper identifies the potential for additional study focused on this area.
“Such misuse of terms not only justifies the erroneous practice of research bureaucracy of evaluating research performance on those terms but also encourages editors of scientific journals and reviewers of research papers to ‘game’ the bibliometric indicators. For instance, if a journal seems to lack an adequate number of citations, the editor of that journal might decide to make it obligatory for its authors to cite papers from the journal in question. I know an Indian journal of fairly reasonable quality by several other criteria, but I can no longer consider it so because it forces authors to include unnecessary (that is, plainly false) citations to papers in that journal. Any further assessment of this journal that includes self-citations will lead to a distorted measure of its real status….
An average paper in the natural or applied sciences lists at least 10 references.1 Some enterprising editors have taken this number to be the minimum for papers submitted to their journals. Such a norm is enforced in many journals from Belarus, and we, authors, are now so used to that norm that we do not even realize the distortions it creates in bibliometric data. Indeed, I often notice that some authors – merely to meet the norm of at least 10 references – cite very old textbooks and Internet resources with URLs that are no longer valid. The average for a good paper may be more than 10 references, and a paper with fewer than 10 references may yet be a good paper (The first paper by Einstein did not have even one reference in its original version!). I believe that it is up to a peer reviewer to judge whether the author has given enough references and whether they are suitable, and it is not for a journal’s editor to set any mandatory quota for the number of references….
Some international journals intervene arbitrarily to revise the citations in articles they receive: I submitted a paper with my colleagues to an American journal in 2017, and one of the reviewers demanded that we replace references in Russian with references in English. Two of us responded with a correspondence note titled ‘Don’t dismiss non-English citations’, which we then submitted to Nature: in publishing that note, the editors of Nature removed some references – from the paper2 that condemned the practice of replacing an author’s references with those more to the editor’s liking – and replaced them with a perhaps more relevant reference to a paper that we had never even read at that point! …
Editors of many international journals are now looking not for quality papers but for papers that will not lower the impact factor of their journals….”
“cOAlition S strategy of applying a prior licence to the Author’s Accepted Manuscript (AAM) is designed to facilitate full and immediate open access of funded scientific research for the greater benefit of science and society. It helps authors exercise their ownership rights on the AAM, so they can share it immediately in a repository under an open licence.
The manuscript – even after peer-review – is the intellectual creation of the authors. The RRS is designed to protect authors’ rights. The costs that publishers incur for the AAM, such as managing the peer-review process, are covered by subscriptions or publication fees. Delivering such publication services does therefore not entitle publishers to limit, constrain or appropriate ownership rights in the author’s AAM.
Some subscription publishers have recently put in place practices that attempt to prevent cOAlition S funded researchers from exercising their right to make their AAM open access immediately on publication.
The undersigned – cOAlition S funders and other stakeholders in academic publishing – wish to provide clarity to researchers about these practices, and caution them about the possible consequences….”
Abstract: Despite the calls for change, there is significant consensus that when it comes to evaluating publications, review, promotion, and tenure processes should aim to reward research that is of high “quality,” has an “impact,” and is published in “prestigious” journals. Nevertheless, such terms are highly subjective, making it challenging to ascertain precisely what such research looks like. Accordingly, this article responds to the question: how do faculty from universities in the United States and Canada define the terms quality, prestige, and impact? We address this question by surveying 338 faculty members from 55 different institutions. This study’s findings highlight that, despite their highly varied definitions, faculty often describe these terms in overlapping ways. Additionally, the results show that the marked variance in definitions across faculty does not correspond to demographic characteristics. This study’s results highlight the need for evaluation regimes that do not rely on ill-defined concepts.
“Open access (OA) is a set of principles and a range of practices through which research outputs are distributed online, free of cost or other access barriers. With open access strictly defined (according to the 2001 definition), or libre open access, barriers to copying or reuse are also reduced or removed by applying an open license for copyright….”
“I’d put this historically. “Gold OA” originally meant OA delivered by journals regardless of the journal’s business model. Both fee-based and no-fee OA journals were gold, as opposed to “green OA”, which meant OA delivered by repositories….”
“Tristan Louis gives weight to a new term that I like a lot: fauxpen. Faux in French means “false” or “fake”. So fauxpen means fake open. There has always been a lot of that going around, but since the world of tech inevitably contains more of everything, there’s more fauxpen stuff than ever….”
“Furthermore, it appears that the turn toward open access in the scholarly communications landscape is increasingly facilitating the agendas of an oligopoly of for-profit data analytics companies. Perhaps realizing that “they’ve found something that is even more profitable than selling back to us academics the content that we have produced,”5 they venture ever further up the research stream, with every intent to colonize and canalize its entire flow.6 This poses a severe threat to the independence and quality of scholarly inquiry.7
In the light of these troubling developments, the expansion of Dotawo from a “diamond” open access journal to a common access journal represents a strong reaffirmation of the call that the late Aaron Swartz succinctly formulated in his “Guerilla Open Access Manifesto”: …
Swartz’s is a call to action that transcends the limitations of the open access movement as construed by the BOAI Declaration by plainly affirming that knowledge is a common good. His call goes beyond open access, because it specifically targets materials that linger on a paper or silicon substrate in academic libraries and digital repositories without being accessible to “fair use.” The deposit of the references from Dotawo contributions in a public library is a first and limited attempt to offer a remedy, heeding the “Code of Best Practices in Fair Use” of the Association of Research Libraries, which approvingly cites the late Supreme Court Justice Brandeis that “the noblest of human productions — knowledge, truths ascertained, conceptions, and ideas — become, after voluntary communication to others, free as the air to common use.”9 This approach also dovetails with the interpretation of “folk law” recently propounded by Kenneth Goldsmith, the founder of the public library UbuWeb….”
“The National Information Standards Organization (NISO) today announces the publication of its Recommended Practice, RP-31-2021, Reproducibility Badging and Definitions. Developed by the NISO Taxonomy, Definitions, and Recognition Badging Scheme Working Group, this new Recommended Practice provides a set of recognition standards that can be deployed across scholarly publishing outputs, to easily recognize and reward the sharing of data and methods….”