“It seems that barely a month goes by these days without another acquisition in the scholarly communications and publishing space. Most of the attention has focused on major acquisitions by Elsevier and Clarivate, particularly Elsevier’s recent acquisition of Interfolio, the company behind the reporting tool Researchfish, and Clarivate’s purchase of ProQuest at the end of last year. And to be sure, their movement towards scholarly workflow tools and platforms is an extremely important development. The recent news that the Copyright Clearance Center will acquire Ringgold is an important reminder that many other firms, including not-for-profits, are actively pursuing growth strategies that contain elements other than organic growth. It is also another confirmation of the extreme strategic value of infrastructure, including in particular the persistent identifiers, lovingly known as PIDs, that are needed to advance scholarly communication in an increasingly open access environment. And it raises the question of whether infrastructure will be managed openly through community-governed organizations or the extent to which the sector can live with its privatization….”
“…The OpenAccess object
The OpenAccess object describes access options for a given work. It’s only found as part of the Work object.
is_oa (Boolean): True if this work is Open Access (OA).
There are many ways to define OA. OpenAlex uses a broad definition: having a URL where you can read the fulltext of this work without needing to pay money or log in. You can use the alternate_host_venues and oa_status fields to narrow your results further, accommodating any definition of OA you like.
oa_status (String): The Open Access (OA) status of this work. Possible values are:
gold: Published in an OA journal that is indexed by the DOAJ
green: Toll-access on the publisher landing page, but there is a free copy in an OA repository.
hybrid: Free under an open license in a toll-access journal.
bronze: Free to read on the publisher landing page, but without any identifiable license.
closed: All other articles.
oa_url (String): The best Open Access (OA) URL for this work.
Although there are many ways to define OA, in this context an OA URL is one where you can read the fulltext of this work without needing to pay money or log in. The “best” such URL is the one closest to the version of record.
This URL might be a direct link to a PDF, or it might be to a landing page that links to the free PDF.
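A minimal sketch of how these access fields might be read in practice, assuming the OpenAccess sub-object layout described above (an `open_access` object with `is_oa`, `oa_status`, and `oa_url` keys inside a Work record). The work record below is hand-made for illustration, not real API output:

```python
# Sketch: reading the OpenAccess sub-object from an OpenAlex Work record.
# A live lookup would hit https://api.openalex.org/works/<id>; the payload
# below is a hand-made illustration of the documented field layout.

def summarize_open_access(work: dict) -> str:
    """Return a one-line summary of a work's OA access options."""
    oa = work.get("open_access", {})
    if not oa.get("is_oa"):
        return "closed: no free-to-read copy found"
    status = oa.get("oa_status", "unknown")
    url = oa.get("oa_url") or "no URL recorded"
    return f"{status}: best OA copy at {url}"

sample_work = {
    "id": "https://openalex.org/W0000000",  # placeholder ID
    "open_access": {
        "is_oa": True,
        "oa_status": "green",
        "oa_url": "https://example-repository.org/paper.pdf",
    },
}

print(summarize_open_access(sample_work))
# green: best OA copy at https://example-repository.org/paper.pdf
```

The same function falls through to the `closed` case whenever `is_oa` is false, mirroring the definition above: no free-to-read URL means closed.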
“DOIs, or Digital Object Identifiers, have become ubiquitous in our scholarly ecosystem, to the point that I find most researchers today are at least vaguely familiar with the idea. You can possibly blame Sci-Hub for this, but today many other tools and systems are comfortable asking for DOIs without feeling the need to explicitly explain what they are.
Similarly, librarians, even those not in scholarly communication, know of DOIs. Most of us know that DOIs are meant as a unique persistent identifier for articles, but that having a DOI has no bearing on the quality of the article.
Still, knowing of DOIs is not the same as actually having a good or detailed understanding….
“We are excited to announce the first of a series of planned collaborations between ORCID and the OA Switchboard with the launch of ORCID-enabled smart matching in OA Switchboard. With their April 2022 release, OA Switchboard users will be able to leverage authoritative affiliation data from authors’ ORCID profiles to corroborate affiliation or organizational identifiers (such as ROR or Ringgold IDs) and ensure more accurate routing of the messages being shared between participants throughout the Open Access (OA) research cycle and publication journey, ultimately resulting in more complete and better quality metadata in the OA Switchboard messages for each article published. …”
“Over the past 10 years, stakeholders across the scholarly communications community have invested significantly not only to increase the adoption of ORCID by researchers, but also to build the broader infrastructures that are needed both to support ORCID and to benefit from it. These parallel efforts have fostered the emergence of a “research information citizenry” among researchers, publishers, funders, and institutions. This paper takes a scientometric approach to investigating how effectively ORCID roles and responsibilities within this citizenry have been adopted. Focusing specifically on researchers, publishers, and funders, ORCID behaviors are measured against the approximated research world represented by the Dimensions dataset….”
“Join us live for this evidence-based look at how the different stakeholders deal with reporting on OA-related publication level information via a series of talks and use case presentations. We are actively building on the Reporting Made Easy use case that OA Switchboard launch customers and founding partners continue to work on collaboratively.
The OA Switchboard’s overarching focus will continue to be on authoritative data from source, interoperability of existing systems and connecting the dots of existing PIDs. All achieved in our characteristic, well established, collaborative spirit….”
“While most journal websites give only the names of the editors, some add a country, some include affiliations, and very few link to a professional profile or an ORCID iD. Even when it is clear when the editorial board details were updated, it is hardly ever possible to find information on past editorial boards, and almost none list declarations of competing interests.
We hear of instances where a researcher’s name has been listed on the board of a journal without their knowledge or agreement, potentially to deceive other researchers into submitting their manuscripts. Regular reports of impersonation, nepotism, collusion and conflicts of interest have become a cause for concern.
Similarly, recent studies on gender representation and gender and geographical disparity on editorial boards have highlighted the need to do better in this area and provide trusted, reliable and coherent information on editorial board members in order to add transparency, prevent unethical behaviour, maintain trust, promote and support research integrity….
We are proposing the creation of some form of Registry of Editorial Boards to encourage best practice around editorial boards’ information and governance that can easily be accessed and used by the community….”
“This Dear Colleague Letter describes and encourages effective practices for publicly sharing research data, including the use of persistent digital identifiers (PDIs).
Datasets underpinning published research findings are expected to be shared with other researchers, at no more than incremental cost and within a reasonable time. Data-sharing holds numerous benefits, from enabling broader research collaboration, through facilitating transparency and solidifying confidence in scientific research, to providing increased resources for teaching and education purposes. Recent studies found that research articles containing a link to data in a repository have markedly higher usage and visibility. Discoverable and citable data also serve to reduce barriers to entry for junior researchers, scientists in under-served communities, and researchers from underrepresented and minority groups, thus enabling improved implementation of open science principles.
The nature of digital data produced during research may vary among the different topical disciplines encompassed by the field of Materials Research. Most often, digital research data comprise one or more of the following: raw data files collected using experimental instrumentation and converted into digital format; digital files of processed experimental data; video and animation files; numerical data produced by computer simulations or computational models; computer code, scripts, software, software documentation and user manuals developed as part of the research project; digital files of theoretical models, protocols, and methods; educational, instructional, and training materials.
Open-access data sharing platforms (data repositories) comprise the most efficient way to publish and share research data. Moreover, as long-term data curation and preservation are core to their mission, data repositories provide a stable means for data preservation. Upon publication of a dataset, most repositories automatically generate a citation for the data, which includes identifying metadata such as the archiving repository, the data’s author(s), and a PDI such as a digital object identifier (DOI). A DOI is a unique and persistent digital identifier, which, when assigned to a digital entity such as a dataset, remains unchanged over the lifetime of the object. Having a DOI (or other form of PDI) from an open-access repository renders data findable, accessible, and readily citable. Searchable global registries of data repositories provide information on indexed repositories to help researchers identify the most appropriate ones. In the case where a suitable repository is not available, researchers are strongly encouraged to use their institutional digital repositories, which typically issue DOIs to institutionally hosted content….”
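The mechanics that make a DOI citable are simple to sketch: every DOI starts with a `10.`-prefixed registrant code, and prepending the doi.org resolver turns it into a stable, clickable link. A minimal sketch, using a deliberately loose syntax check (the full DOI syntax is broader) and a hypothetical dataset DOI:

```python
import re

# Sketch: a loose syntactic check for DOIs plus the standard resolver URL.
# The regex is a simplification of real DOI syntax; the example DOI below
# is made up for illustration.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def doi_resolver_url(doi: str) -> str:
    """Return the doi.org resolver URL for a syntactically plausible DOI."""
    if not DOI_PATTERN.match(doi):
        raise ValueError(f"does not look like a DOI: {doi!r}")
    return f"https://doi.org/{doi}"

print(doi_resolver_url("10.5281/zenodo.1234567"))  # hypothetical dataset DOI
# https://doi.org/10.5281/zenodo.1234567
```

Because the resolver URL, not the landing page, is what gets cited, the link keeps working even when the hosting repository reorganizes its site.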
by Geoffrey Bilder
Just over a year ago, Crossref announced that our board had adopted the Principles of Open Scholarly Infrastructure (POSI).
It was a well-timed announcement, as 2021 yet again showed just how dangerous it is for us to assume that the infrastructure systems we depend on for scholarly research will not disappear altogether or adopt a radically different focus. We adopted POSI to ensure that Crossref would not meet the same fate.
23 Scholarly Communication Things by Queensland University of Technology is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License, except where otherwise noted.
I. Foundations of Scholarly Communication
Jennifer Hall; Eileen Salisbury; and Catherine Radbourne
Copyright and Creative Commons
Katya Henry; Rani McLennan; and David Cohen
Paula Callan; Tanya Harden; and Brendan Sinnamon
II. Research Data Management
Managing research data
Publishing research data
Philippa Frame and Stephanie Jacobs
Licensing research data
Philippa Frame and Stephanie Jacobs
III. Open Access
Open Access organisations and developments – National and international
Open Access Models
Ginny Barbour; Paula Callan; and Stephanie Jacobs
Open Educational Resources (OERs)
Katya Henry; Kate Nixon; and Sarah Howard
Which journal or book publisher to publish with
Paula Callan and Catherine Radbourne
Avoiding deceptive and vanity journals/conferences
Stephanie Jacobs; Catherine Radbourne; and Ginny Barbour
Persistent identifiers (PIDs)
Stephanie Jacobs; Paula Callan; Tanya Harden; and Brendan Sinnamon
Preprints, Preprint servers and Overlay journals
Ginny Barbour; Stephanie Jacobs; and Catherine Radbourne
Kate Harbison; Paula Callan; and Tanya Harden
V. Publication Metrics
Responsible use of metrics
Catherine Radbourne and Tanya Harden
Citation counts, author level metrics and journal rankings
Alice Steiner and Tanya Harden
Databases for metrics
“New articles submitted to arXiv are now automatically assigned DOIs that align with their arXiv ID. This makes research articles more discoverable across search engines because associated metadata is made available to the community in a reusable format.
DOIs (digital object identifiers) are unique and unchanging, just like the original arXiv ID number already assigned to every arXiv article. However, because DOIs are used across many different platforms, they enable greater interoperability with other services….”
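The alignment between the two identifiers can be sketched as a direct mapping: arXiv's announced scheme builds the DOI from its `10.48550` prefix plus the arXiv ID itself. The ID below is a made-up example:

```python
# Sketch of the mapping between a (new-style) arXiv ID and its
# auto-assigned DOI, per arXiv's announced 10.48550/arXiv.<id> scheme.
# The arXiv ID used here is invented for illustration.

def arxiv_doi(arxiv_id: str) -> str:
    """Build the DOI corresponding to a new-style arXiv identifier."""
    return f"10.48550/arXiv.{arxiv_id}"

print(arxiv_doi("2204.01234"))
# 10.48550/arXiv.2204.01234
```

Because the DOI embeds the arXiv ID verbatim, either identifier can be recovered from the other without a lookup.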
“With ICOR we will be working on distributed experiments for collective gain. Some of the key areas of alignment between Arcadia and ICOR are in:
Developing Open Science Best Practices: As we develop our open science program, we will contribute to ICOR’s library of guidelines, sharing our approach, our documentation, and our learnings.
Creating an IP Toolbox: We believe that open science and commercialization do not have to be mutually exclusive. Establishing a strong and creative IP strategy is essential for proving that open science can support and speed our commercial pursuits. We have already learned from the resources provided by ICOR and are working on developing agreements and means of tracking our progress, which we will share back with the community.
Building Research Output Management Systems (ROMS) and Using Persistent Identifiers (PIDs): Scientists have traditionally relied on journals and journal articles to house and disseminate their data, but the journal system wasn’t built with today’s diverse and ever-expanding datasets in mind. New systems are needed to share and organize scientific research. Arcadia is committed to using PIDs to facilitate discoverability and to depositing data in repositories that meet FAIR (Findability, Accessibility, Interoperability, and Reuse) principles. Working towards shared data schemas for all research outputs will help facilitate discussion, review, and reuse.
Facilitated Collaboration: Collaboration is central to Arcadia’s success, and we aim to collaborate widely while maintaining our commitment to open science. We are in the process of developing our Collaborator Agreement, and will work with ICOR to share it, to track its success and any necessary revisions.
Modular Data and Review: It is current standard practice to release data and solicit peer review at the end of a project. We believe that releasing data more frequently and gathering and integrating community feedback more often and earlier in a project’s lifespan will accelerate science and produce better results.
Tracking Nano-Contributions: Author lists on journal articles do not accurately reflect a scientist’s contribution to a project and can promote territorialism and competition, rather than collaboration. Arcadia will be developing new methods for mapping contributions. These methods will provide a richer, more substantive picture of a person’s contribution, and ICOR will aid in measuring and tracking the success of these methods.
Metrics of Utilization: Knowing if and how people are using the data you produce is key to providing a valuable resource for the community. As we develop our ROMS, we will incorporate meaningful metrics to track utilization and will learn how to improve our data products to increase accessibility and reuse. …”
“CHORUS and ChemRxiv have signed a one-year Memorandum of Understanding (MOU) to pilot a preprint dashboard service.
By using persistent identifiers, CHORUS will create a dashboard for ChemRxiv that connects preprints to funders and datasets as well as information related to public accessibility and other key metadata to be added later. The Preprint Dashboard will aid in discoverability of preprints with the potential to provide non-ambiguous links between the preprint and published research, researchers, and their funding….”
“DOI enhances the accessibility, discoverability, trustability, and interoperability of digital objects and serves the openness and visibility of professionally published content. While I am not a DOI expert, I know about it because I use it a lot in my profession. I believe DOI will play a significant role in the automation of literature reviews. More than it does now.
It is the responsibility of librarians, information specialists and other information professionals to raise awareness about the benefits of DOI. …”
Abstract: With this work, we present a publicly available data set of the history of all the references (more than 55 million) ever used in the English Wikipedia until June 2019. We have applied a new method for identifying and monitoring references in Wikipedia, so that for each reference we can provide data about associated actions: creation, modifications, deletions, and reinsertions. The high accuracy of this method and the resulting data set was confirmed via a comprehensive crowdworker labeling campaign. We use the data set to study the temporal evolution of Wikipedia references as well as users’ editing behavior. We find evidence of a mostly productive and continuous effort to improve the quality of references: There is a persistent increase of reference and document identifiers (DOI, PubMedID, PMC, ISBN, ISSN, ArXiv ID) and most of the reference curation work is done by registered humans (not bots or anonymous editors). We conclude that the evolution of Wikipedia references, including the dynamics of the community processes that tend to them, should be leveraged in the design of relevance indexes for altmetrics, and our data set can be pivotal for such an effort.
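The identifier-tracking the abstract describes can be illustrated with a toy extractor: Wikipedia citations are wikitext templates whose identifier parameters (`doi`, `pmid`, `pmc`, `isbn`, `issn`, `arxiv`) appear as `|field=value` pairs. This is only a sketch with an invented citation string; the paper's actual reference-monitoring method is more involved:

```python
import re

# Sketch: pulling identifier parameters out of a Wikipedia citation
# template. Field names follow common {{cite}} conventions; the template
# string below is invented for illustration.
ID_FIELDS = ("doi", "pmid", "pmc", "isbn", "issn", "arxiv")

def extract_identifiers(citation: str) -> dict:
    """Return identifier parameters found in a {{cite ...}} template."""
    found = {}
    for field in ID_FIELDS:
        m = re.search(rf"\|\s*{field}\s*=\s*([^|}}]+)", citation)
        if m:
            found[field] = m.group(1).strip()
    return found

cite = "{{cite journal |title=Example |doi=10.1000/xyz123 |pmid=12345678}}"
print(extract_identifiers(cite))
# {'doi': '10.1000/xyz123', 'pmid': '12345678'}
```

Counting how many references yield a non-empty dictionary over successive revisions is, in miniature, the kind of measurement behind the reported rise in identifier coverage.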