GigaScience and GigaByte Groups Join Sciety

Over the last month, we have added two new groups, GigaScience and GigaByte, from the journals of the same name, increasing the number of specialist teams displaying their evaluations on Sciety.

GigaScience and GigaByte are part of GigaScience Press. With a decade-long history of open-science publishing, they aim to revolutionise publishing by promoting reproducibility of analyses and data dissemination, organisation, understanding, and use. As open-access and open-data journals, they publish all research objects (including data, software and workflows) from ‘big data’ studies across the life and biomedical sciences. These resources are managed using the FAIR Principles for scientific data management and stewardship, which state that research data should be Findable, Accessible, Interoperable and Reusable. They also follow the practices of transparency and openness in science publishing, and as such, they embrace open peer review (which is mandated for both journals) and preprints (which are strongly encouraged in GigaScience and mandated for GigaByte). The opportunities for combining both are covered by GigaScience in its video on open science and preprint peer review for Peer Review Week.


Advancing Self-Evaluative and Self-Regulatory Mechanisms of Scholarly Journals: Editors’ Perspectives on What Needs to Be Improved in the Editorial Process

Meticulous self-evaluative practices in the offices of academic periodicals can be helpful in reducing widespread uncertainty about the quality of scholarly journals. This paper summarizes the results of the second part of a qualitative worldwide study among 258 senior editors of scholarly journals across disciplines. By means of a qualitative questionnaire, the survey investigated respondents’ perceptions of needed changes in their own editorial workflow that could, according to their beliefs, positively affect the quality of their journals. The results show that the most relevant past improvements indicated by respondents were achieved by: (a) raising the required quality criteria for manuscripts, by defining standards for desk rejection and/or shaping the desired qualities of the published material, and (b) guaranteeing a rigorous peer review process. Respondents believed that, currently, three areas have the most pressing need for amendment: ensuring higher overall quality of published articles (26% of respondents qualified this need as very high or high), increasing the overall quality of peer-review reports (23%), and raising reviewers’ awareness of the required quality standards (20%). Bivariate analysis shows that respondents who work with non-commercial publishers reported an overall greater need to improve implemented quality assessment processes. Work overload, inadequate reward systems, and a lack of time for development activities were cited by respondents as the greatest obstacles to implementing necessary amendments.

„Forum 13+“-Spektrum zur Bewertung von Open Access-Transformationsverträgen und Verlagsangeboten: Stand Oktober 2021 (“Forum 13+” spectrum for the evaluation of Open Access transformation contracts and publishing offers: status October 2021)

via deepl.com: The following spectrum for the evaluation of OA transformation contracts and publishing offers is a result of the work of the independent working group “Forum 13+” and is primarily aimed at negotiators of OA transformation contracts and thus at acquisition and licensing experts at academic libraries and library consortia.

German original:

Das folgende Spektrum zur Bewertung von Open Access-Transformationsverträgen und Verlagsangeboten ist ein Arbeitsergebnis der unabhängigen Arbeitsgruppe „Forum 13+“ und richtet sich in erster Linie an Verhandler*innen von Open Access-Transformationsverträgen und damit an die Erwerbungs- und Lizenzierungsexpert*innen an wissenschaftlichen Bibliotheken und an Bibliothekskonsortien.

Bringing arts and humanities perspectives to the redefinition of ”what counts” in research(er) evaluation | DARIAH Open

by Erzsébet Tóth-Czifra

Research assessment (i.e. decisions on the allocation of research funds, academic career advancement, and the hiring of staff) has long been recognized as the Achilles heel of firmly grounding Open Science practices in research realities. In academia, scholars still face conflicting injunctions and have to walk both paved and unpaved paths, while advocacy and research policy efforts repeatedly point out the enormous complexities, systemic impediments and failed attempts in scaling up thoughtful, alternative proxies that could replace the current harmful system dominated by publisher prestige.

As an example of the many voices urging and supporting this change, in DARIAH’s response to the stakeholder consultation on the European Commission’s 2019 report Future of scholarly publishing and scholarly communication, we argue that the vicious circle in which research evaluation is lingering can only be broken through a set of urgent and harmonized actions, and call for a new social contract at the European level involving funders, research performing institutions and their ministries, university networks, disciplinary communities and research infrastructure providers (including publishers).

Gadd (2021) Mis-Measuring Our Universities: Why Global University Rankings Don’t Add Up | Frontiers in Research Metrics and Analytics

Gadd, E. (2021). Mis-Measuring Our Universities: Why Global University Rankings Don’t Add Up. Frontiers in Research Metrics and Analytics. https://doi.org/10.3389/frma.2021.680023

Abstract: Draws parallels between the problematic use of GDP to evaluate economic success and the use of global university rankings to evaluate university success. Inspired by Kate Raworth’s Doughnut Economics, this perspective argues that the pursuit of growth as measured by such indicators creates universities that ‘grow’ up the rankings rather than ones that ‘thrive’ or ‘mature.’ Such growth creates academic wealth divides within and between countries, even though the direction of growth encouraged by the rankings does not truly reflect universities’ critical purpose or contribution. Highlights the incompatibility between universities’ alignment with socially responsible practices and continued engagement with socially irresponsible ranking practices. Proposes four possible ways of engendering change in the university rankings space. Concludes by calling on leaders of ‘world-leading’ universities to join together to ‘lead the world’ in challenging global university rankings, and to set their own standards for thriving and maturing universities.

OPERAS report “Future of Scholarly Communication. Forging an inclusive and innovative research infrastructure for scholarly communication in Social Sciences and Humanities” | Zenodo

Avanço, Karla, Balula, Ana, Błaszczyńska, Marta, Buchner, Anna, Caliman, Lorena, Clivaz, Claire, … Wieneke, Lars. (2021, June 29). Future of Scholarly Communication. Forging an inclusive and innovative research infrastructure for scholarly communication in Social Sciences and Humanities. Zenodo. https://doi.org/10.5281/zenodo.5017705


This report discusses the scholarly communication issues in Social Sciences and Humanities that are relevant to the future development and functioning of OPERAS. The outcomes collected here can be divided into two groups of innovations regarding 1) the operation of OPERAS, and 2) its activities. The “operational” issues include the ways in which an innovative research infrastructure should be governed (Chapter 1) as well as the business models for open access publications in Social Sciences and Humanities (Chapter 2). The other group of issues is dedicated to strategic areas where OPERAS and its services may play an instrumental role in providing, enabling, or unlocking innovation: FAIR data (Chapter 3), bibliodiversity and multilingualism in scholarly communication (Chapter 4), the future of scholarly writing (Chapter 5), and quality assessment (Chapter 6). Each chapter provides an overview of the main findings and challenges with emphasis on recommendations for OPERAS and other stakeholders like e-infrastructures, publishers, SSH researchers, research performing organisations, policy makers, and funders. Links to data and further publications stemming from work concerning particular tasks are located at the end of each chapter.

An easy-access dashboard now provides links to scientific discussion and evaluation of bioRxiv preprints.

“Part of our mission at bioRxiv is to alert readers to reviews and discussion of preprints and support the different ways readers provide feedback to authors on their work. These include tweets, comments on preprints and community- or journal-organized peer reviews. bioRxiv improves discoverability of such efforts by linking to peer reviews, community discussions and mentions of the preprint in social and traditional media. By aggregating this information in a new dashboard, we are now making these even easier for readers to find and access.

A series of new icons now appears in the dashboard launch bar, above each Abstract, representing different sources of preprint discussion or evaluation; the numbers of each evaluation or interaction are shown, and clicking on one of the icons opens a dashboard with details of the entries in that section….”